Authors: Smith, W.; Walker, A. S.; Zhang, B. (2012)
Publisher: Copernicus Publications
Languages: English
Types: Article
Subjects: TA1-2040, T, TA1501-1820, Applied optics. Photonics, Engineering (General). Civil engineering (General), Technology
The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned aerial vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data production, such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications.
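Because the extraction algorithms described below work from a digital surface model (DSM) rather than raw returns, a common first step is to rasterize the point cloud. The following sketch is an illustrative assumption, not the authors' code: the function name, the numpy-based approach and the 1 m default cell size are all hypothetical. It keeps the highest return that falls in each grid cell.

    import numpy as np

    def points_to_dsm(points, cell_size=1.0):
        """points: (N, 3) array of x, y, z coordinates; returns a 2-D DSM grid."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        cols = np.floor((x - x.min()) / cell_size).astype(int)
        rows = np.floor((y - y.min()) / cell_size).astype(int)
        dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
        order = np.argsort(z)                       # ascending, so the highest
        dsm[rows[order], cols[order]] = z[order]    # return in a cell wins
        return dsm

Cells with no returns remain NaN and would be gap-filled (for example by interpolation) before further processing.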

The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs, such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both the digital surface model (DSM) and the digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case studies have been conducted using a variety of point densities, terrain types and building densities. The results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011.
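As a concrete illustration of the steps just listed, the sketch below chains a ground filter (to derive a DEM from the DSM), an above-ground mask, region grouping and a building/tree separation test. It is a hypothetical reconstruction rather than the production software: the morphological-opening ground filter, the roughness criterion and every threshold are assumptions, and boundary tracing, polygon regularization and roof construction are omitted.

    import numpy as np
    from scipy import ndimage

    def extract_building_regions(dsm, cell_size=1.0, min_height=2.5,
                                 min_area=25.0, max_roughness=0.5):
        """dsm: gap-filled 2-D elevation grid; returns masks of building candidates."""
        # 1. Derive an approximate DEM with a morphological opening wide enough
        #    to remove buildings, then compute above-ground height.
        dem = ndimage.grey_opening(dsm, size=(31, 31))
        ndsm = dsm - dem

        # 2. Identify and group raised cells into regions.
        labels, n_regions = ndimage.label(ndsm > min_height)

        # 3. Separate buildings from trees: roofs are smooth, canopies are rough,
        #    so threshold the local standard deviation of the surface slope.
        gy, gx = np.gradient(dsm, cell_size)
        roughness = ndimage.generic_filter(np.hypot(gx, gy), np.std, size=3)

        buildings = []
        for region in range(1, n_regions + 1):
            mask = labels == region
            area = mask.sum() * cell_size ** 2
            if area >= min_area and roughness[mask].mean() <= max_roughness:
                buildings.append(mask)
        # Boundary tracing, regularization and roof construction would follow.
        return buildings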

In addition to standard geospatial applications and UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.
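A minimal sketch of that refinement idea follows. It assumes matched building corners have already been measured both in the LiDAR-derived model (treated as truth) and in coordinates derived from the image metadata, and it solves only for a planimetric shift; a full solution would adjust the exterior orientation (GPS/IMU) parameters directly. The function name and the least-squares translation approach are assumptions, not the authors' method.

    import numpy as np

    def estimate_metadata_shift(lidar_corners, image_corners):
        """Both inputs: (N, 2) arrays of matched planimetric corner coordinates."""
        residuals = lidar_corners - image_corners      # LiDAR treated as truth
        shift = residuals.mean(axis=0)                 # least-squares translation
        rmse = np.sqrt(((residuals - shift) ** 2).sum(axis=1).mean())
        return shift, rmse                             # shift corrects the image georeferencing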