GeoWorld August 2011


EDGENODES

Why Does Mapping Need Robots?

BY NIGEL WATERS

Nigel Waters, editor of Cartographica, is a professor of geography and director of the Center of Excellence for Geographic Information Science, George Mason University; e-mail:

Mapping by robots is a major new research area, because robots can do the mapping humans can't or won't do. We need robots to map locations where it's unsafe for humans to enter, such as the Fukushima Daiichi nuclear power plant damaged by the tsunami following the Japanese earthquake on March 11, 2011. Robots also can be used to map areas more quickly for time-critical events. A third area of application is to use virtual robots to map simulated environments. The robot itself needs to comprehend and map its environment to move through it, a process known as simultaneous localization and mapping (SLAM).

Mapping the Academic Way

Before a robot can be taught to map a new environment, it must be able to navigate unknown territory. But before it can do this, it must be able to communicate. The University of Queensland's Ruth Schulz is a leading expert in this field. She and her colleagues are responsible for the Lingodroid Project, which seeks to develop robot languages so robots can interact with each other. This is a somewhat less-challenging goal than having robots learn human languages. In Schulz's world, physically or virtually manifested robots play games with each other to develop the concepts of direction, place, duration and distance. This allows the robots to characterize the time-space environments in which they're to operate. The Lingodroid Project is a stepping stone to allowing robots to develop and use their own version of Wikipedia. This is known as RoboEarth (www.roboearth.org) and will allow robots worldwide to learn from each other.
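The SLAM idea mentioned above — a robot refining its own position estimate while it builds the map — can be illustrated with a toy one-dimensional sketch. This is not the Lingodroid or RatSLAM code, just a minimal illustration under assumed numbers: the robot dead-reckons with biased odometry, anchors a landmark in its map on first sighting, and splits the disagreement between pose and map when it sees the landmark again.

```python
# Toy 1-D SLAM sketch (illustrative only; real SLAM systems use
# probabilistic filters or graph optimization, not this simple gain).

def toy_slam(motions, ranges, gain=0.5):
    """motions: odometry readings per step; ranges: measured distance to a
    single landmark ahead of the robot (None when it isn't visible).
    Returns (pose estimate, landmark estimate)."""
    pose_est = 0.0
    landmark_est = None
    for move, rng in zip(motions, ranges):
        pose_est += move  # dead reckoning: drifts if odometry is biased
        if rng is None:
            continue
        if rng is not None and landmark_est is None:
            landmark_est = pose_est + rng      # first sighting anchors the map
        else:
            innovation = (pose_est + rng) - landmark_est
            pose_est -= gain * innovation      # split the disagreement between
            landmark_est += gain * innovation  # the pose and the map
    return pose_est, landmark_est

# Assumed scenario: the robot truly moves 1.0 per step, but odometry
# over-reports 1.1 per step; a landmark truly sits at 5.0, so the true
# ranges from poses 1 and 4 are 4.0 and 1.0.
pose, lm = toy_slam([1.1, 1.1, 1.1, 1.1], [4.0, None, None, 1.0])
# Dead reckoning alone would end at 4.4; re-observing the landmark pulls
# the pose estimate back toward the true value of 4.0.
```

The correction step is the essence of SLAM: a re-observed landmark exposes accumulated odometry drift, and the estimator divides the error between the robot's pose and the map instead of trusting either alone.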
A detailed description of the RoboEarth project may be found in the work of Markus Waibel and his colleagues (2011), which, although breathtaking in its scope, envisages only the use of CAD models and inexplicably neglects GIS representations. Much of Schulz's work is grounded in the pioneering geographic information science of Donna Peuquet (2002) and the work of neuroscientists (see Waters, 2009). The University of Queensland researchers have their robots operate in three environments of increasing sophistication and complexity:

1. A grid world where spatial ambiguities are minimized and scientists focus on algorithm development.

2. A simulated world that introduces a certain level of ambiguity.

3. A real-world office environment with a full range of spatial ambiguities.

Particularly inspiring is the work of Schulz's colleagues, Michael Milford and Gordon Wyeth, on RatSLAM, which draws on neuroscience research into the rodent hippocampus. This research established that the rodent brain has place and head-direction cells. The RatSLAM software used this analogy to develop a geometric representation of the environment that permitted navigation through a suburb.

Mapping the Kinect Way

The research frontier can be moved along by the academic community, big industry or "citizen science." As Jason Tanz noted in a June 2011 article for Wired magazine, when Microsoft released the Kinect add-on for the Xbox 360 on Nov. 4, 2010, it provided robot hobbyists with an inexpensive, lightweight imaging device that recorded 3-D views in real time. In other words, this was SLAM technology for the rest of us. The hacker community (the "modders" of the world) and citizen scientists (the tinkerers among us) saw the possibilities immediately. Almost at once, a team from Berkeley integrated a Kinect with a micro-quadrotor helicopter that could fly around a room navigating autonomously.
Soon after, a group from the University of the Bundeswehr Munich had a robot car negotiating an obstacle course, and this was followed by a group at Warwick University that developed a robot for negotiating obstacles such as the rubble from a landslide that might hide buried survivors. If a helicopter could navigate a room, a car could see and avoid obstacles, and a robot could walk around rubble, then each of those devices could map its environment.

How Kinect Works

The Kinect was designed to allow game players to experience a controller-free interface, where the game's progress is directed by body movements. To achieve this, the $150 Xbox add-on incorporated a depth camera that uses scattered light from an infrared emitter to build a 3-D map of the objects in front of it. These data are enhanced by a color camera, four microphones that capture sound, and a tilt motor to center and target data acquisition. All the data then are fed to the Xbox through an unencrypted USB link. According to Tanz, this lack of encryption, spurred by a contest sponsored by open-source hardware company Adafruit Industries, allowed Spanish hacker
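The step from the depth camera's per-pixel distances to the 3-D map described above is a standard pinhole-camera back-projection. The sketch below is a minimal illustration, not Kinect driver code; the focal length and principal point are assumed round numbers for a 640x480 depth image, not official Kinect calibration values.

```python
# Back-project a depth image into a 3-D point cloud with the pinhole model.
FX = FY = 575.0          # focal length in pixels (assumed, for illustration)
CX, CY = 320.0, 240.0    # principal point of a 640x480 image (assumed)

def depth_to_points(depth, fx=FX, fy=FY, cx=CX, cy=CY):
    """depth: 2-D list of per-pixel distances in meters (0 = no return,
    e.g. in an infrared shadow). Returns a list of (X, Y, Z) points in the
    camera frame."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue                 # no depth measured at this pixel
            x = (u - cx) * z / fx        # pinhole model: X = (u - cx) * Z / fx
            y = (v - cy) * z / fy        # and          Y = (v - cy) * Z / fy
            points.append((x, y, z))
    return points

# A toy 2x2 "depth image": three returns at 2 m, one missing pixel.
cloud = depth_to_points([[2.0, 0.0], [2.0, 2.0]])
```

Sweeping such a per-frame point cloud through space while tracking the sensor's pose is what lets a hobbyist's helicopter, car or walking robot accumulate a 3-D map of a room in real time.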
