Completed Projects

Computer Vision

Single-Image Super Resolution for Multispectral Images

In optical remote sensing, spatial resolution is crucial for applications using image data. Post-processing operations, e.g., segmentation, classification, or object extraction in general, can benefit from detailed and distinguishable structures obtained by a single-image super resolution pre-processing step. This research focuses on single-image super resolution techniques for multispectral image data, using deep-learning approaches that are well established in the field of computer vision. The figure shows simulated results of a single-image super resolution method.

We show that recently introduced state-of-the-art approaches for single-image super resolution of RGB images, which make use of Convolutional Neural Networks (CNNs), can successfully be applied to multispectral images after re-training with suitable training data. For training our CNN, we use a dataset of publicly available Sentinel-2 single-band images and the SRCNN architecture presented by Dong et al. (2014). We simulate low-resolution input data through subsequent down- and upsampling of the images and are thus able to implement a self-supervised training stage.
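
As an illustration of this self-supervised setup, the following sketch (not the project's actual training code; the scale factor and interpolation choice are assumptions) shows how a low-/high-resolution training pair could be generated from a single Sentinel-2 band:

```python
import numpy as np
import cv2


def make_training_pair(band: np.ndarray, scale: int = 2):
    """Return (input, target): the input is a blurred version of the target,
    obtained by bicubic downsampling followed by upsampling back to the
    original grid, so the CNN can learn to undo the resolution loss."""
    h, w = band.shape
    # Downsample by the given factor ...
    low = cv2.resize(band, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    # ... and upsample back to the original grid size.
    low_up = cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)
    return low_up.astype(np.float32), band.astype(np.float32)
```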

In experiments, we obtained better results than competing approaches trained on generic image sets, as well as conventional interpolation methods. We are currently working on extending our implementation to end-to-end super resolution of multiple spectral bands at once, thus exploiting the full information contained in multispectral remote sensing imagery.

Contact Persons:
Lukas Liebel
Marco Körner

Realtime 3D-Reconstruction from Image Sequences

Stereo reconstruction is one of the most extensively researched topics in computer vision. The central task of stereo matching is correspondence establishment. The calculated disparity image indicates the difference in location of corresponding pixels. Over the years, numerous algorithms have been proposed. However, a number of difficulties hamper the accuracy as well as the efficiency of stereo matching. Real-time stereo reconstruction systems are in high demand for many applications, such as fast 3D reconstruction in close-range applications and urban areas. For several years, the designs of the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) have been converging towards multi-core computing. Modern GPU architectures, like NVIDIA's Compute Unified Device Architecture (CUDA), enable developers to flexibly parallelize stereo methods. In this project, the focus is on both high accuracy and efficient computation time. Various algorithms under the energy minimization framework for stereo vision are compared, analyzed, and optimized. In addition, different optimization aspects, such as the use of CUDA, are combined into a compact solution to exploit the potential performance of the devices and to shorten the calculation time. The algorithms are applied to close-range benchmarks and to remotely sensed images of urban areas.
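
To make the matching step concrete, the following sketch computes a disparity image with OpenCV's semi-global block matcher; the parameters and file names are illustrative placeholders and not the settings used in this project:

```python
import cv2

# Rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching approximates the global energy-minimization problem
# by aggregating matching costs along several 1-D paths through the image.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range, must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,        # penalty for small disparity changes (smoothness)
    P2=32 * 5 * 5,       # penalty for large disparity jumps (depth edges)
)

# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```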

Contact Person:
Ke Zhu

Cooperation Partner:
German Aerospace Center (DLR)

Tracking of Vehicles in Complex Large Urban Environments

Advanced traffic management systems continue to gain importance due to an increasing road traffic volume in urban areas. For modeling and understanding the entire traffic dynamics, the trajectories of individual vehicles need to be observed. By analyzing these trajectories, traffic parameters such as vehicle density, velocities, overtaking maneuvers, merge and exit behavior, and traffic congestion for a certain time span can be derived. Such results serve as highly relevant input data for traffic modeling and simulation software, for testing the efficacy of traffic control measures, and as input to GIS for traffic monitoring. In this project, a system for tracking individual vehicles in complex urban environments is developed. The input data are obtained from airborne image sequences acquired with commercial medium-format frame cameras. These cameras enable the coverage of large areas at a reasonable ground sampling distance. The aim is to monitor large urban environments with complex traffic scenarios in order to derive area-wide traffic parameters. The vehicle trajectories are obtained by a stochastic tracking method: multiple vehicles are tracked by individual particle filters, while their interaction is handled separately.
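
The following minimal sketch illustrates the particle filter principle for one vehicle with a constant-velocity motion model; the positional likelihood is a simple placeholder rather than the appearance-based measure used in the project:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles
particles = np.zeros((N, 4))              # state: [x, y, vx, vy]
weights = np.full(N, 1.0 / N)


def predict(particles, dt=1.0, sigma=1.0):
    """Propagate particles with a constant-velocity model plus process noise."""
    particles[:, 0] += particles[:, 2] * dt
    particles[:, 1] += particles[:, 3] * dt
    particles += rng.normal(0.0, sigma, particles.shape)
    return particles


def update(particles, weights, measurement, sigma=2.0):
    """Re-weight particles by how well they explain the measured position."""
    d = np.linalg.norm(particles[:, :2] - measurement, axis=1)
    weights *= np.exp(-0.5 * (d / sigma) ** 2)
    weights += 1e-300                      # avoid numerical collapse
    weights /= weights.sum()
    return weights


def resample(particles, weights):
    """Resample to focus particles on likely states."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```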

Contact Person:
Isabella Szottka

Cooperation Partner:
German Aerospace Center (DLR)

Interpretation of People Trajectories and Event Detection from Image Sequences

In the last decades, the number and the size of big events have strongly increased. A major research topic is the analysis of crowds of people in order to detect anomalous and dangerous events at an early stage. Aerial images are ideally suited for the analysis of crowds because of their wide field of view, such as those provided by the 3K camera system of our cooperation partner, the German Aerospace Center (DLR), and by UAVs. Our task is to interpret multiple trajectories of people in order to detect concrete mass events such as congestion, brawls, or panic, as well as more specialized events comprising only a few trajectories.
The interpretation of multiple trajectories is a complex field of research. Using near-range surveillance cameras, event detection from a single trajectory has already been achieved. Also, a general statement that a crowd is performing some abnormal activity, in contrast to its usual activity, can already be made. However, there is little relevant research on the detection of concrete events using multiple trajectories within wide-scale images.
We aim to obtain an event detection system by learning predefined scenarios which represent both standard motion patterns and concrete dangerous or abnormal situations. Learning and event detection at each time step will be performed using probabilistic Hidden Markov Models (HMMs). In contrast to the common use of HMMs in speech recognition or medicine, where a single observation is used to determine the hidden state of the system, we combine multiple observations such as the speed, the motion direction, and the weight between neighboring trajectories. A dynamic network containing all trajectories of the scene at each time step is then intended to represent the topology as well as the current scenario clusters. Observation of these scenario clusters could then be used by rescue teams to infer potentially dangerous situations.
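
As a rough illustration of how several observations can be combined per time step, the sketch below stacks speed, motion direction, and neighbourhood weight into one feature vector and fits a Gaussian HMM with hmmlearn; the synthetic data and the choice of three hidden states are assumptions for demonstration only:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic per-time-step observations for one group of trajectories
# (placeholders; real values would be derived from the tracked people).
rng = np.random.default_rng(0)
T = 200
speeds = rng.normal(1.4, 0.3, T)              # walking speed in m/s
directions = rng.normal(0.0, 0.2, T)          # deviation from the main flow (rad)
neighbour_weights = rng.normal(0.5, 0.1, T)   # coupling to neighbouring trajectories

# Combine the observations into one feature vector per time step.
observations = np.column_stack([speeds, directions, neighbour_weights])

# A Gaussian HMM with three hidden states is fitted; in practice one model per
# predefined scenario could be trained and compared via the log-likelihood.
model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(observations)

hidden_states = model.predict(observations)   # most likely state sequence
log_likelihood = model.score(observations)    # used to rank competing scenarios
```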

Contact Person:
Florian Burkert

Cooperation Partners:
German Aerospace Center (DLR)
Karlsruhe Institute of Technology (KIT)

Automatic Vehicle Tracking in Aerial Image Sequences

This project is in cooperation with the ARGOS project of DLR whose goal is to build up an airborne traffic monitoring system for large areas with radar and optical sensors. Automatic derivation of traffic situations and other parameters is meant to support traffic management and decision making in case of catastrophes and major events.
The optical sensor being used is DLR’s 3K camera system. This system, with three cameras mounted on one platform, delivers image sequences with a frame rate of 1-3 Hz and a ground resolution of 15-40 cm, depending on the recording mode and flight altitude. The ground coverage of a single image ranges from approx. (500 m)² to (1.5 km)².
The aim of this research project is to develop algorithms for automatic derivation of traffic parameters in near real time. After their initial detection, vehicles are automatically tracked in the following images of the sequence. From the determined positions, traffic parameters such as velocity, traffic flow, and traffic density are derived. These parameters are linked to road segments of the NAVTEQ road database.
A very robust algorithm has been developed which delivers results with high correctness. This approach uses image-gradient-based matching in three consecutive images with equal time spacing and a statistical motion analysis. Another algorithm, a problem-constrained adaptation of the particle filter, is currently being tested.
Furthermore, motion information derived from the tracking results is subsequently used to refine the detection results and thus the traffic density. This is achieved by a statistical evaluation of the vehicles’ velocities at each road segment in order to eliminate possible false detections, which appear as standing objects in the tracking results.
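
The following sketch illustrates the idea of this statistical check; the data structure, the speed threshold, and the per-segment grouping are assumptions, not the project's implementation:

```python
import numpy as np


def filter_standing_objects(tracks, min_speed=1.0):
    """tracks: dict mapping road segment id -> list of (positions, timestamps)
    per tracked vehicle. Returns, per segment, only the vehicles whose mean
    speed exceeds min_speed (m/s), discarding apparent standing objects."""
    confirmed = {}
    for segment, vehicles in tracks.items():
        kept = []
        for positions, times in vehicles:
            positions = np.asarray(positions, dtype=float)
            times = np.asarray(times, dtype=float)
            # speed from consecutive positions and time stamps
            step = np.linalg.norm(np.diff(positions, axis=0), axis=1)
            speed = step / np.diff(times)
            if speed.mean() > min_speed:
                kept.append((positions, times))
        confirmed[segment] = kept
    return confirmed
```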

Contact Person:
Dominik Lenhart

Cooperation Partner:
German Aerospace Center (DLR)

Road Extraction from Digital Aerial Images

The system was developed for road extraction in open, rural areas. The input data for the automatic extraction are scanned panchromatic aerial images with a resolution of approximately 0.2 to 0.5 meters.
The performance of the system was confirmed by an external evaluation of the results, which showed that the developed system is currently among the best in the world. Another finding, however, was that the operational use of fully automatic systems appears to be too ambitious a goal for the near future. Apart from cases in which the quality requirements are less demanding, post-processing by a human operator will always be necessary, since 100% completeness and reliability cannot be guaranteed for automatically generated results. This project was continued in research on automatic road extraction in built-up areas and in a project on semi-automatic road extraction, which was geared more towards practical applications.

Contact Person:
Albert Baumgartner

Cooperation Partner:
Institut für Informatik, Forschungsgruppe Bildverstehen

Synthetic Aperture Radar

Singular Value Decomposition for Persistent Scatterer Interferometry

In Persistent Scatterer Interferometry, temporally coherent pixels are considered in a stack of differential interferograms. After the elimination of atmospheric errors, estimates for topography and deformation are computed between these scattering points. Eventually, the final result yields absolute values providing information about ground displacement over long time periods, which is of special interest for urban areas. One problem to be solved during the estimation is that not all relative estimates are reliable, due to errors in phase unwrapping or in the functional model. With a weighted singular value decomposition, it is possible to estimate the points robustly in the sense of a least-squares adjustment. Moreover, a major advantage of the concept is the possibility of multi-threaded processing, which saves computing time for large networks.
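
A conceptual sketch of such an SVD-based network solution is given below; the toy arcs, weights, and the minimum-norm handling of the datum defect are illustrative assumptions, not the project's processing chain:

```python
import numpy as np

# Toy network: relative estimates (e.g. deformation rates) between point
# pairs (i, j) with associated weights.
arcs = [(0, 1, 2.0, 1.0), (1, 2, -1.0, 0.5), (0, 2, 1.2, 1.0)]
n_points = 3

A = np.zeros((len(arcs), n_points))
l = np.zeros(len(arcs))
w = np.zeros(len(arcs))
for k, (i, j, rel, weight) in enumerate(arcs):
    A[k, i], A[k, j] = -1.0, 1.0          # relative estimate x_j - x_i
    l[k], w[k] = rel, weight

# Apply the weights to the design matrix and observations.
Aw = A * np.sqrt(w)[:, None]
lw = l * np.sqrt(w)

# SVD-based pseudo-inverse: tolerant to the datum (rank) defect of the
# network, yielding the minimum-norm least-squares solution.
U, s, Vt = np.linalg.svd(Aw, full_matrices=False)
tol = max(Aw.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
x_hat = Vt.T @ (s_inv * (U.T @ lw))       # estimated per-point parameters
```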

Contact Person:
Werner Liebhart

Tomopolis - Generation of 3D Structures and Ground Control Points from SAR Data in Urban Areas

Synthetic aperture radar delivers a two-dimensional reflectivity image of the earth's surface, which is a cylindrical projection of the three-dimensional volume into the two-dimensional image space. This kind of imaging is unambiguous only if the terrain slope facing the radar is smaller than the look angle θ (otherwise layover occurs) and the terrain slope facing away from the sensor is smaller than 90° - θ (otherwise shadow occurs).
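
The rule above can be written directly as a small check; the sign convention for the slope is an assumption made for illustration:

```python
def sar_visibility(slope_deg: float, theta_deg: float) -> str:
    """slope_deg > 0: terrain facet slopes towards the radar;
    slope_deg < 0: facet slopes away from it. theta_deg is the look angle."""
    if slope_deg > theta_deg:
        return "layover"
    if -slope_deg > 90.0 - theta_deg:
        return "shadow"
    return "unambiguous"


# Example: a 50 deg slope facing a radar with a 35 deg look angle folds into layover.
print(sar_visibility(50.0, 35.0))
```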
In addition to the reflectivity map, across-track interferometers like SRTM or TanDEM-X are capable of creating height maps from the parallax of two images with slightly different looking directions. However, ambiguities in layover areas cannot be resolved. This is the reason for erroneous or even useless height models in urban areas. Hence, the standard products of TanDEM-X cannot be used for an accurate and reliable derivation of 3D objects and properties like building shape and height in cities.
The main goal of this project is the improvement of the localization and deformation estimation accuracy of objects in urban areas by adding several motion- and atmosphere-free TanDEM-X acquisitions to existing TerraSAR-X data stacks. The potential of the TerraSAR-X and TanDEM-X missions is fully exploited for an analysis of 3D structures and objects by tomographic SAR methods and PSI in order to bypass geometrical limitations (such as layover) of radar and therefore to enhance the precision and quality of TanDEM-X elevation models.

Contact Persons:
Dr. Stefan Gernhardt
Dr. Michael Eineder

Cooperation Partner:
German Aerospace Center (DLR)

Radar Simulation for Understanding High Resolution Spaceborne SAR Images

High resolution spaceborne radar satellite missions like TerraSAR-X, TanDEM-X, or COSMO-SkyMed provide SAR data with a spatial resolution of up to one meter. Nevertheless, the visual interpretation of prominent signatures in SAR images is still challenging due to geometrical distortions such as foreshortening or layover. In urban areas containing man-made objects, it is of special interest whether image signatures are caused by deterministic or random scattering of the radar signal. Moreover, the question has to be answered whether image signatures related to multiple signal reflections really represent physical objects. The investigation of deterministic signal responses may help to understand the nature of salient and long-term stable pixels in SAR data. For instance, these pixels are used in persistent scatterer interferometry (PSI) for detecting ground displacements in the range of a few millimeters per year.
For supporting the interpretation of spaceborne SAR data, RaySAR, a SAR simulator based on the open-source ray tracer POV-Ray, has been developed. It enables the simulation of signal reflections at man-made objects using adapted rendering methods. The focus is on single urban objects whose geometry has to be represented by detailed 3D models. The ray tracing algorithms are adapted in order to provide output data in SAR imaging geometry. Besides the simulation of reflectivity maps containing all signal responses, different reflection levels can be assigned to separate image layers. Moreover, specular reflections are identified based on a geometrical analysis of the reflection processes.
Signal samples are simulated in three dimensions, i.e. azimuth, range, and elevation. Hence, scatterers within the same resolution cell are spatially separated. Based on this knowledge, layover phenomena occurring in urban areas can be resolved and explained. For analyzing the origin of salient SAR image signatures, the corresponding reflecting surfaces can be marked on the 3D object model. Moreover, the 3D positions of signatures can be mapped on the 3D model of simulated scenes. Thereby, the potentials and limitations of interferometric SAR systems with regard to the 3D localization of scatterers can be analyzed and better understood.
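
For illustration, the following sketch maps a 3D scene point into azimuth/slant-range coordinates under a simplified straight-track, zero-Doppler geometry; it is a geometric illustration only and not part of the RaySAR implementation:

```python
import numpy as np


def to_sar_geometry(point, sensor_pos, flight_dir):
    """point, sensor_pos: 3-D coordinates; flight_dir: vector along the track.
    Returns (azimuth, slant_range) of the point."""
    flight_dir = np.asarray(flight_dir, dtype=float)
    flight_dir /= np.linalg.norm(flight_dir)
    d = np.asarray(point, dtype=float) - np.asarray(sensor_pos, dtype=float)
    azimuth = d @ flight_dir                      # along-track coordinate
    across = d - azimuth * flight_dir             # component orthogonal to the track
    slant_range = np.linalg.norm(across)          # distance at closest approach
    return azimuth, slant_range


# Example: a point 300 m to the side and 50 m ahead of the sensor position.
print(to_sar_geometry([50.0, 300.0, 0.0], [0.0, 0.0, 500.0], [1.0, 0.0, 0.0]))
```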

Contact Person:
Dr. Stefan Auer

Cooperation Partner:
German Aerospace Center (DLR)

3D Localization and Motion Estimation of Persistent Scatterers in High Resolution SAR-Interferometry

Deformation monitoring in urban areas is an important task that can be carried out by Persistent Scatterer Interferometry (PSI), which was invented in the late 1990s. For this special processing methodology, several SAR images have to be acquired, such as those obtainable from ERS, ENVISAT, or TerraSAR-X. In this so-called stack of images, single points are identified that show a stable, point-like scattering behaviour in all acquisitions covering the desired time span; these represent the Persistent Scatterer (PS) candidates. In the following procedure, the phase values of these points are evaluated in the time and spatial domains, and finally the deformation signal at each position is estimated together with the 3D position of the phase center of the PS. With the launch of the German SAR satellite TerraSAR-X, the potential of PSI increased enormously, as this sensor is capable of acquiring radar images with a resolution of up to 1 meter or even better. These high resolution spotlight data takes show remarkable detail in urban areas, and the number of detectable potential PS increased considerably compared to previous images from ERS or ENVISAT.

In this project, Persistent Scatterer Interferometry is applied to stacks of high resolution TerraSAR-X spotlight data, and the final positions of the PS are estimated in 3D space. With the help of high resolution digital elevation models (DEM), which are generated from airborne laser or optical (stereo) data, the obtained PS can be evaluated with respect to their position on the buildings, and therefore the - up to now mostly unknown - nature of the PS can be investigated as soon as the exact positions are known. Furthermore, the high density of PS can be used to improve the results of the motion estimation: all PS found on one single object of interest are grouped together - by evaluating additional sources of information such as a GIS, land register maps, or a laser DEM - and the estimation of the displacement is carried out for the whole group, using a restricted least-squares adjustment. This leads to an increased accuracy of the motion estimates, including the removal of outliers, by taking the spatial distribution of the PS into account. The main goals of this research are the investigation and improvement of the positional accuracy of PS in urban areas, to better understand the nature of the scattering effects, and an improvement of the PSI motion estimates by using additional constraints extracted from high resolution radar images.
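
The group-wise idea can be sketched as follows; the joint fit with a shared deformation rate, per-PS offsets, and the simple outlier screening are a simplified illustration of a restricted least-squares adjustment, not the project's algorithm:

```python
import numpy as np


def group_velocity(times, displacements, outlier_sigma=3.0):
    """times: (T,) acquisition times in years; displacements: (P, T) per-PS
    displacement series in mm for all PS on one object.
    Returns the common deformation rate in mm/year."""
    times = np.asarray(times, dtype=float)
    disp = np.asarray(displacements, dtype=float)
    # Individual rates, used only to screen outlier PS.
    rates = np.array([np.polyfit(times, d, 1)[0] for d in disp])
    ok = np.abs(rates - np.median(rates)) < outlier_sigma * (rates.std() + 1e-9)
    # Joint fit: all remaining PS share one slope, with an offset per PS.
    t = np.tile(times, ok.sum())
    y = disp[ok].ravel()
    A = np.column_stack([t, np.repeat(np.eye(ok.sum()), len(times), axis=0)])
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[0]   # shared velocity; x[1:] are per-PS offsets
```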

Contact Person:
Dr. Stefan Gernhardt

Cooperation Partner:
German Aerospace Center (DLR)

4D-SAR – Tomographic and Simulation-Based Learning Methods

Since the middle of 2007, Synthetic Aperture Radar (SAR) satellites such as TerraSAR-X have been available with a spatial resolution of up to 1 m (VHR). The research project “4D-SAR” aims at the development of methods for evaluating the potential of modern VHR SAR data for a 4D (spatial-temporal) analysis of built-up infrastructure. Two research directions are pursued:
Coherent approaches such as SAR interferometry, persistent scatterer interferometry, and SAR tomography are suitable for reconstructing the 3D shape of buildings. Moreover, relative deformations can be estimated in the range of millimeters. For extending the coherent approaches, new methods for spectral analysis and data fusion are developed. A methodical unification of different coherent approaches into a generalized SAR tomography concept is to be realized.
Non-coherent approaches make use of the image content of data, mostly as a function of the incidence angle. They enable the detection of prominent changes like the destruction of buildings or façade structures. To this end, SAR radargrammetry and computer vision methods are applied. In this regard, simulation methods play a key role, for instance to forecast SAR images of building models, which are available or are generated by coherent approaches beforehand.

Contact Person:
Dr.-Ing. Stefan Auer

Cooperation Partners:
Karlsruher Institut für Technologie, Institut für Photogrammetrie und Fernerkundung
Leibniz Universität Hannover, Institut für Photogrammetrie und Geoinformation
Technische Universität Berlin, Computer Vision & Remote Sensing
Technische Universität München, Fachgebiet Photogrammetrie und Fernerkundung

Exupéry - Mobile Volcano Fast Response System (VFRS)

Volcanic unrest and volcanic eruptions are among the major natural hazards, next to earthquakes, floods, and storms. The project proposed to develop the core of a mobile Volcano Fast Response System that can be quickly installed in case of a volcanic crisis or volcanic unrest. The system builds on established volcano monitoring techniques such as seismicity and ground deformation measurements, but also includes new tools like geoelectric soundings and new spectroscopic methods to monitor gas chemistry. Another cornerstone of the system is the direct inclusion of satellite-based observations to deduce ground deformation, detect hazardous gas emissions, and monitor thermal activity. All this information is collected in a central database, which is the backbone of the system.
WP2 Satellite observations – ground deformation measurement with SAR Interferometry (InSAR): SAR interferometry provides the means to detect large-scale deformation caused by volcanic activity with centimeter and even millimeter accuracy over local areas. The accuracy of InSAR is limited by temporal decorrelation of the surface and by electromagnetic path delay variations. The Persistent Scatterer Interferometry (PSI) method uses massive data stacking, the selection of long-term stable points, and motion models in order to reduce the estimation error to the sub-millimeter level. In our work package, multiple satellite missions will be used in order to improve the temporal resolution.

Contact Persons:
Xiaoying Cong
Dr. Michael Eineder

Cooperation Partners:
German Aerospace Center (DLR)
BGR (Hannover)
GFZ Potsdam
IFM-GEOMAR
University of Potsdam
University of Hamburg
Darmstadt University of Technology
Ludwig Maximilians University of Munich

DeSecure - Bundle Project for Satellite-Supported Information Mining for Crisis Applications

The bundle project “DeSecure” was initiated by the Space Agency of the German Aerospace Center (DLR). The general aim of the project is to improve satellite-based information mining for crisis applications in Germany, including the complete processing cycle. The bundle project DeSecure will build up a complete framework to provide all relevant information regarding the extent and impact of crisis scenarios within the shortest possible time frames. Thus, a special emphasis is on the exploitation of multi-sensorial optical and SAR satellite images. The current and, in particular, the future availability of these sensors makes it possible to acquire such images nearly everywhere and at any time.
The task of the Remote Sensing Technology department of TU München is the verification of intact infrastructure, which is an important issue for the management of civil crises, e.g. those caused by floodings or earthquakes. The goal is the development of a system for the automated detection of intact roads from multi-sensorial imagery, which is required for the coordination of rescue teams and the implementation of emergency actions.

Contact Person:
Daniel Frey

Cooperation Partners:
German Aerospace Center (DLR)
Definiens AG
GAF AG
Infoterra GmbH
PRO DV Software AG
RapidEye AG
Technische Universität Berlin

IGSSE Projects

IGSSE "SafeEarth"

The project “SafeEarth” is part of the TU München International Graduate School of Science and Engineering (IGSSE), funded by the Excellence Initiative of the German federal and state governments. The project is directed towards the fast, integrated, geometrically and semantically correct interpretation of multi-sensorial airborne and spaceborne images. The methods will be developed for optical and radar data with the goal of supporting a fast reaction after natural disasters. We focus in particular on the automatic assessment and characterization of infrastructural objects like roads and buildings due to their importance for the immediate planning and implementation of emergency actions.

To this end, three essential components of image understanding science will be brought together, which also reflect the expertise of the cooperating partners:

  • Sensor technology and physical sensor modeling (DLR)
  • Mathematical and statistical modeling in image processing (DTU)
  • Automatic image understanding and object extraction (TUM)

The IGSSE project “SafeEarth” deals with the development of models and methods to extract information from remotely sensed imagery, with the ultimate goal of supporting natural disaster management and hazard preparedness.
Special focus is on the assessment of road networks after flooding because of their major relevance for providing on-site assistance. Probabilistic graphical models are used to assess roads with respect to their trafficability after flooding. The graphical models combine simulation and observation to improve the assessment of the roads: the simulation of flooded roads is established by means of Digital Elevation Models (DEM), while the observations are derived from remote sensing images. The graphical model thus provides a statistical framework that combines the images and the DEM.
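
A minimal illustration of combining simulation and observation for one road segment is given below; the probabilities are invented placeholders rather than values from the project's graphical model:

```python
def trafficable_posterior(p_prior, p_obs_given_traff, p_obs_given_blocked):
    """p_prior: prior probability that the road segment is trafficable,
    e.g. derived from the simulated water level over the DEM.
    The two likelihoods describe how probable the image observation is
    under each hypothesis. Returns the posterior probability (Bayes' rule)."""
    num = p_obs_given_traff * p_prior
    den = num + p_obs_given_blocked * (1.0 - p_prior)
    return num / den


# Example: the simulation says the segment is probably dry (prior 0.7),
# but the image evidence weakly suggests flooding.
print(trafficable_posterior(0.7, 0.4, 0.8))
```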

Contact Person:
Daniel Frey

Cooperation Partners:
German Aerospace Center (DLR)
Technical University of Denmark (DTU)

IGSSE "Dynamic Earth"

The project “Dynamic Earth” is part of the TU München International Graduate School of Science and Engineering (IGSSE), funded by the Excellence Initiative of the German federal and state governments. The project is directed towards the mechanistic understanding of three- and four-dimensional Earth System processes, incorporating the essential approaches fundamental to modern science: experiment, observation, and simulation. The focus will be put on integrating (data-driven) observations from space into model-driven parameters of Earth System processes in close feedback loops. To this end, an integrated approach will be investigated, which is in particular designed to assimilate data into models on the one hand, and to regularize data based on knowledge provided by models on the other hand.
The research envisioned in this proposal will be exemplified by the observation, modeling, and understanding of two Earth System processes that have immediate as well as long-term influence on humanity: the mechanics of active volcanoes, e.g. posing a threat to cities, and the dynamics of polar ice caps indicating long-term changes of our climate. Methodologically, the proposed research will integrate spaceborne radar Earth observation techniques with numerical modeling of Earth System processes on the surface and in the interior of our planet – like glacial flow. The Remote Sensing Technology department of TUM will contribute to the research team through the development of spaceborne radar methods specifically designed to monitor geophysical processes.

Contact Persons:
Xiaoxiang Zhu
Michael Eineder

Cooperation Partners:
German Aerospace Center (DLR)
Fachgebiet Computational Mechanics
Lehrstuhl für Statik, CeSIM-GS

IGSSE "4D City"

Static 3D city models are well established for many applications such as architecture, urban planning, navigation, tourism, and disaster management. The recent launches of very high resolution (VHR) Synthetic Aperture Radar (SAR) Earth observation satellites, like the German TerraSAR-X, provide for the first time the possibility to derive both shape and deformation parameters of urban infrastructure on a continuous basis. The proposal “4D City” is directed towards 4D (space-time) urban infrastructure monitoring and visualization by fusion of multiple sensors: Differential Tomographic SAR (D-TomoSAR), stereo-optical images, and LiDAR (Light Detection and Ranging). The different sensors complement each other: stereo-optical data have the best visual interpretability, LiDAR provides very accurate Digital Elevation Models (DEM), and D-TomoSAR as well as the related Persistent Scatterer Interferometry (PSI) are the only methods that provide the dynamic component of buildings with up to millimeter accuracy, e.g. seasonal thermal dilation, structural deformation, or subsidence due to groundwater extraction or underground construction.

Since VHR D-TomoSAR of urban infrastructure is a new research area, the development and improvement of appropriate robust TomoSAR algorithms will be the first focus of the project. Second, data fusion for combining the strengths of the different sensors will be developed both on the geodetic and the semantic level. Finally, a particular challenge will be the user-specific visualization of the 4D multi-sensor data showing the urban objects and their dynamic behavior. The visualization must handle spatially anisotropic data uncertainties and possibly incomplete dynamic information. It may also integrate some of the data fusion steps. Different levels of user expertise must be considered.

The research envisioned in this proposal will lead to a new kind of city model for monitoring and visualizing the dynamics of urban infrastructure at a very high level of detail. The change or deformation of different parts of individual buildings will be accessible for different types of users (geologists, civil engineers, decision makers, etc.) to support city monitoring and management as well as risk assessment. As shadowing appears in a single stack of SAR images, the final shadow-free 4D city model will be generated by fusion of the results from multiple SAR image stacks. Preliminary results show that the fused point clouds from ascending and descending stacks can cover almost the complete front and back facades.

Contact Person:

Yuanyuan Wang

Cooperation Partners:

German Aerospace Center (DLR)

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, China

Further Focus Areas

Monitoring of Volcanic SO2 Emissions using the GOME-2 Satellite Instrument

Volcanic eruptions pose a major danger to people living in the vicinity of volcanoes. With the fast-growing population, more and more people live close to active volcanoes, and the growing developments in infrastructure and transportation systems make societies more vulnerable. The region surrounding a volcano is directly affected by an eruption, e.g. due to lava flows, pyroclastic flows, and lahars. Besides being a direct danger to the local population, volcanic eruptions have also proven to be a major hazard to aviation. The emission of trace gases, such as SO2, is an important indicator of volcanic activity. Volcanic eruptions and passive degassing of volcanoes are the most important natural sources of sulfur dioxide. During an eruption, SO2 is the third most abundant gas found in volcanic plumes, after H2O and CO2. Changes in the SO2 flux can be a precursor for the onset of volcanic activity.
Slant columns (SC) of SO2 are determined from solar backscatter measurements of the GOME-2 satellite instrument using the well-established Differential Optical Absorption Spectroscopy (DOAS) method in the wavelength region between 315 and 326 nm. The slant columns are then converted to geometry-independent vertical columns (VC) through division by an appropriate air mass factor (AMF):

VC=SC/AMF
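
In code, this conversion is a single division; the example values are placeholders:

```python
def vertical_column(slant_column: float, air_mass_factor: float) -> float:
    """Convert a DOAS slant column (SC) into a geometry-independent
    vertical column (VC) via VC = SC / AMF."""
    return slant_column / air_mass_factor


# Example with placeholder numbers (column in Dobson Units, dimensionless AMF).
print(vertical_column(2.4, 1.6))
```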

The retrieved SO2 columns are used for several early warning services related to volcanic hazards, such as the Support to Aviation Control Service (SACS), which is part of the GMES (Global Monitoring for Environment and Security) Service Element for the Atmosphere (PROMOTE), and the mobile volcano fast response system Exupéry within the BMBF GEOTECHNOLOGIEN program.

Contact Person:
Meike Rix

Automatic Analysis of Traffic Scenario in Urban Areas using Airborne LiDAR Data

This project deals with the automatic detection and motion indication of vehicles from LiDAR data for traffic scenario analysis in urban areas. The research work is mainly motivated by the following two objectives:

  • to improve the completeness of vehicle detection, thanks to the ability of the laser beam to penetrate tree canopies
  • to indicate vehicle motion and estimate velocity based on the special scanning mechanism of airborne LiDAR

Because of the required automation of the interpretation and the complexity of vehicle objects in LiDAR data, this task is coupled with the challenging scientific problems of the 3D semantic inference in irregularly sampled point clouds of urban environments.
To manage the enormous complexity of urban scenes in LiDAR data, the system is based on context-guided and data-driven analysis strategies. The extracted vehicle point sets can further be passed to a parameterization process for determining their dynamic status. Our recent results show that this approach achieves a relatively reliable extraction of vehicles from cluttered urban areas using single-pass or multi-strip LiDAR data, even if the point density is not high. The vehicle motion can be better resolved when sampled with a point density higher than 2 pts/m².
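
One possible data-driven step is sketched below: above-ground returns are clustered into vehicle-sized candidate objects; the thresholds and the use of DBSCAN are illustrative assumptions and do not reproduce the project's context-guided strategy:

```python
import numpy as np
from sklearn.cluster import DBSCAN


def vehicle_candidates(points, ground_height, min_h=0.5, max_h=3.0):
    """points: (N, 3) array of x, y, z coordinates; ground_height: (N,) terrain
    height under each point. Returns a list of candidate point clusters."""
    above = points[:, 2] - ground_height
    mask = (above > min_h) & (above < max_h)          # keep vehicle-height returns
    candidates = points[mask]
    # Cluster in the horizontal plane into compact candidate objects.
    labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(candidates[:, :2])
    clusters = [candidates[labels == k] for k in set(labels) if k != -1]
    # Very coarse size check for car-like footprints (roughly 2 m x 5 m).
    return [c for c in clusters
            if 1.0 < np.ptp(c[:, 0]) < 8.0 and 1.0 < np.ptp(c[:, 1]) < 8.0]
```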

Contact Person:
Yao Wei