GeoWorld, January 2013

Data Issues

…data that come off the sensors, as well as backups at varying stages of processing, in addition to the final point-cloud delivery.

After storage, of course, the data need to be delivered, and that's another issue. Some laugh at the old joke that a station wagon full of tapes driving down the interstate has higher bandwidth than a high-speed network. But it's less funny when considering that the same vendor in Oregon spends almost $200,000 a year on express delivery charges to ship hard drives to and from customers. If a typical desktop connection at an office provides roughly an 8Mb/second transfer rate to the Internet, then the 238GB dataset from Figure 1 (the Houston/Galveston Area Council) would take four days to transfer, assuming the connection holds.

Finally, after the data get where they need to be, applications that consume the data can hold only a tiny fraction of the dataset in memory at one time; before they can do any processing, they need some way to retrieve just the points of interest. For huge datasets, that's a significant challenge, and it's the reason many applications that perform such processing are notoriously slow.

Spatial Indexing

To address these concerns, many software providers index and "pre-pyramid" the data as part of a preprocessing or "ingestion" phase. Indexing assumes that there's some spatial ordering to the data, and it creates a "lookup" table that allows the application to jump directly to points of interest, skipping over those that needn't be considered. Although indexing provides the crucial fix for getting data from a small extent, it's not helpful when considering a much larger extent. Again, the entire large extent can't be held in memory, so a thinned-down version of the data is needed. Pre-pyramiding creates this low-resolution version of the data beforehand, allowing the application to access it, rather than the full dataset, when larger extents are required. (A simple sketch of both ideas appears below.)

With an eye toward the future, the situation is getting worse. Sensors are getting faster, point densities are getting higher and, with the prospect of full-waveform data, the point data themselves are gaining weight. The newest sensors claim pulse rates of more than 400,000 pulses per second (up from 50,000 just a few years ago). The aforementioned one-point/meter nominal pulse density should probably be doubled to create high-quality digital terrain models (DTMs) under canopy cover. And the densities of mobile and stationary LiDAR are orders of magnitude larger than those of airborne LiDAR.

Early Industry Responses

Traditionally, mainstream GIS applications have made limited use of LiDAR data, opting instead to model elevation data as vector contours, triangular irregular networks (TINs) and raster digital elevation models (DEMs). A few bold souls imported LiDAR data into ArcGIS as point features or a terrain dataset, but, in general, the size and complexity of the data resulted in performance issues that precluded their use in mainstream applications. Instead, LiDAR-specific solutions have emerged to allow direct visualization and measurement within the point cloud and, significantly, to produce derivatives that can be consumed by traditional GIS applications. Overwatch's LiDAR Analyst is one such product, used primarily to extract vector features from aerial LiDAR datasets (Figure 2). More recently, other technologies have emerged to bridge the GIS/point-cloud data gap.
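To make the indexing and pre-pyramiding ideas described above concrete, here is a minimal sketch in Python; it isn't drawn from any particular vendor's product, and the cell size and thinning factor are purely illustrative. It bins points into a fixed grid so that a query over a small extent touches only the overlapping cells, and it keeps a crudely thinned overview level to stand in for the full cloud when a large extent is viewed.

import random
from collections import defaultdict

CELL = 100.0       # index cell size in meters (illustrative)
THIN_EVERY = 100   # keep 1 of every N points in the overview "pyramid" level

class PointIndex:
    """Toy spatial index: points binned by grid cell, plus a thinned overview."""

    def __init__(self):
        self.cells = defaultdict(list)   # (col, row) -> list of (x, y, z)
        self.overview = []               # low-resolution copy for large extents
        self.count = 0

    def add(self, x, y, z):
        self.cells[(int(x // CELL), int(y // CELL))].append((x, y, z))
        if self.count % THIN_EVERY == 0:
            self.overview.append((x, y, z))
        self.count += 1

    def query(self, xmin, ymin, xmax, ymax):
        """Return points inside the box, touching only the cells that overlap it."""
        hits = []
        for col in range(int(xmin // CELL), int(xmax // CELL) + 1):
            for row in range(int(ymin // CELL), int(ymax // CELL) + 1):
                hits.extend(p for p in self.cells.get((col, row), ())
                            if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax)
        return hits

# Build a small synthetic cloud and query it.
idx = PointIndex()
for _ in range(200_000):
    idx.add(random.uniform(0, 10_000), random.uniform(0, 10_000), random.uniform(0, 50))

window = idx.query(1_200, 3_400, 1_300, 3_500)   # small extent: full detail
print(len(window), "points in the window;", len(idx.overview), "points in the overview")

The point of the sketch is the access pattern rather than the data structure itself: a small extent is answered from a handful of cells, while a zoomed-out view reads the thinned overview instead of all 200,000 points. Production systems use tiled files, octrees or similar structures, but the principle is the same.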
One example is QCoherent's LP360 for ArcGIS, which emerged in 2006 and was possibly the first application to make LiDAR data accessible to everyday GIS analysts using ArcMap. The LP360 extension employs an ArcMap data layer to access points directly from LAS files (the standard format for storing raw airborne LiDAR data), enabling LiDAR data to be combined with data in any format supported by ArcGIS. Very recently, Esri has begun supporting LiDAR directly through the LAS dataset, new at ArcGIS 10.1. A LAS dataset stores references to one or more LAS files on disk, as well as to additional surface features, enabling geospatial users to examine LAS files in their native format and work with points classified into different feature types. (A brief scripting example appears below.)

Compression

Figure 2. Overwatch's feature-extraction technology is demonstrated in LiDAR Analyst.

Addressing the size problem directly, two compression solutions have emerged that reduce the size of the point cloud itself without thinning, degradation or other…
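Stepping back to the LAS files and classification codes discussed above, the short sketch below illustrates the kind of classified-point access that tools such as LP360 and the LAS dataset expose, using the open-source Python library laspy. The library, the file name and the printed figures are assumptions of this example rather than anything from the article; class 2, however, is the standard ASPRS classification code for ground.

import numpy as np
import laspy  # open-source LAS reader, assumed for this example

# "houston_tile.las" is a hypothetical file name.
las = laspy.read("houston_tile.las")
print(las.header.point_count, "points in the file")

ground = las.classification == 2   # ASPRS classification code 2 = ground
print(int(np.count_nonzero(ground)), "ground returns")

# x, y, z are already scaled to real-world units; report mean ground elevation.
z = np.asarray(las.z)
print("mean ground elevation:", float(z[ground].mean()))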
