GeoWorld January 2013


Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space. - Douglas Adams, "The Hitchhiker's Guide to the Galaxy"

As Light Detection and Ranging (LiDAR) becomes faster and cheaper, the resulting deluge of data has outpaced the industry's ability to use them. The complexity and sheer volume of LiDAR data make exploiting them tremendously challenging. This article overviews the principal causes of those challenges and notes the direction of the industry's response.

But consider this: we've been here before. Landsat (then called the "Earth Resources Technology Satellite") was launched in the early 1970s and ushered in an era of increasing availability of geospatial raster images. Early desktop GISs (e.g., ArcInfo, MapInfo) came on the scene in the mid-1980s, but they couldn't begin to deal with large raster volumes until enabling image-compression technologies came of age in the 1990s: JPEG (first specified in 1986), MrSID (1992) and later JPEG 2000. Today, most popular raster-exploitation platforms (e.g., Google Earth, Bing Maps) rely on the Internet and massive server farms to make the few small tiles of interest available, while the vast majority of the imagery remains accessible but not accessed.

Disk-Space Shortages and Slow Apps

LiDAR datasets are big. Really big. They're big because they cover huge areas at high point densities, and each point record contains a lot of data. The specifics vary from project to project, but think of LiDAR at one-meter post-spacing as roughly equivalent to a raster image with pixels one foot per side. A 1,000-square-kilometer study area might weigh in at about 26GB. Due to the economics of how the data are collected (expensive to get started, cheap to keep flying), study areas frequently are much larger. Datasets in the hundreds of gigabytes aren't uncommon.

The first difficulty this size problem presents, of course, is where to put the data. Even in an age when consumers buy terabyte-sized hard drives at retail stores, those who produce LiDAR data for a living have trouble storing the stuff. A medium-sized vendor in Oregon recently explained that it buys terabyte drives weekly, and it was severely impacted in 2011 when flooding in Thailand made such drives difficult to acquire. The size problem for such vendors is aggravated by operational requirements to store

Figure 1. A nominal size-of-dataset vs. size-of-study plot shows points in the lower left corresponding to study areas covering Mount Rainier, Seattle, New Orleans, Los Angeles and Tokyo.
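The roughly 26GB figure for a 1,000-square-kilometer study area can be checked with back-of-envelope arithmetic. This sketch assumes (the article doesn't specify) one return per square meter at one-meter post-spacing and a 28-byte point record, the size of a LAS point data record format 1 (x, y, z, intensity, classification, GPS time and related fields); both values are illustrative assumptions.

```python
# Back-of-envelope check of the ~26GB dataset-size figure.
# Assumed values (not from the article):
BYTES_PER_POINT = 28      # LAS point data record format 1 is 28 bytes
POINTS_PER_SQ_M = 1       # 1-meter post-spacing -> ~1 point per square meter

area_sq_km = 1_000
points = area_sq_km * 1_000_000 * POINTS_PER_SQ_M  # 1 km^2 = 10^6 m^2
size_gib = points * BYTES_PER_POINT / 2**30        # bytes -> gibibytes

print(f"{points:,} points -> about {size_gib:.0f} GiB")
# One billion 28-byte points comes to roughly 26 GiB, matching the
# figure quoted in the text.
```

Under these assumptions, dataset size scales linearly with both area and point density, which is why the higher densities of modern sensors push projects into the hundreds of gigabytes so quickly.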
