Photo courtesy of MIT Sea Grant College Program
Since the 1970s, when early autonomous underwater vehicles (AUVs) were developed at MIT, Institute scientists have tackled various barriers to robots that can travel autonomously in the deep ocean. This four-part series examines current MIT efforts to refine AUVs' artificial intelligence, navigation, stability and tenacity.
Imagine dropping an underwater vehicle into the ocean and having it survey the ocean floor for debris from an accident or examine a ship's hull for signs of damage. Without any outside guidance or prior knowledge, the vehicle would traverse the target area in a methodical fashion, never repeating itself or going astray, all the while generating a map that shows the surface of interest.
An MIT team has developed advanced mathematical techniques that enable such a scenario to occur, even when the area being examined is large, complex and cluttered, and the information coming from the vehicle's sensors is not always clear and accurate.
"A big problem for an autonomous underwater vehicle is knowing where it's been, where it is now and where it should go next, without any outside help," says John J. Leonard, a professor of mechanical and ocean engineering and a member of the MIT Computer Science and Artificial Intelligence Laboratory. Navigating underwater is tricky. Radio waves don't propagate through seawater, so an AUV can't use GPS as a guide. Optical methods don't work well either: computer vision is difficult even for terrestrial robots, and underwater the problem is worse, because water reflects and refracts light in complex ways and visibility is often poor due to turbidity.
What's left? Sound waves, which can be monitored by acoustic sensors. To help an underwater vehicle navigate, a deepwater energy company may drop a network of acoustic transponders onto the seafloor. The vehicle exchanges acoustic "pings" with the transponders, generating data with which it can calculate its position. But sometimes the signal bounces off extraneous objects, producing inaccurate data. Sometimes several robots share multiple transponders, leading to confusion. And sometimes deploying enough transponders to cover a sufficiently large area is prohibitively expensive.
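The position calculation from transponder ranges can be illustrated with a toy trilateration sketch. Everything here is invented for illustration: the beacon coordinates, the two-dimensional simplification (real systems work in three dimensions and must account for sound-speed variation and timing noise), and the noise-free ranges.

```python
import math

def trilaterate_2d(beacons, ranges):
    """Solve for (x, y) given ranges to three beacons at known positions.

    Subtracting the first range equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # 2(xi-x1)x + 2(yi-y1)y = (xi^2+yi^2-ri^2) - (x1^2+y1^2-r1^2)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 + y2**2 - r2**2) - (x1**2 + y1**2 - r1**2)
    b2 = (x3**2 + y3**2 - r3**2) - (x1**2 + y1**2 - r1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Transponders at made-up surveyed seafloor positions (metres)
beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = (30.0, 40.0)
# Ranges would come from acoustic round-trip times; here computed exactly
ranges = [math.dist(true_pos, b) for b in beacons]
print(trilaterate_2d(beacons, ranges))  # ≈ (30.0, 40.0)
```

With noisy ranges from more than three transponders, the same idea becomes an overdetermined least-squares problem rather than an exact solve.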
"So here's the challenge. You want to place the AUV at an unknown location in an unknown environment and, using only data from its acoustic sensors, let it incrementally build a map while at the same time determining its location on the map," Leonard says. Robot designers have studied the so-called mapping problem for decades, but it's still not solved. As Leonard notes, it's a chicken-and-egg problem: You need to know where you are to build the map, but you need the map to know where you are.
To illustrate how robotic mapping works, and doesn't work, Leonard considers the aftermath of a hypothetical accident. The seabed is covered with debris, and officials need to figure out where it all is. Ideally they'd send down an AUV and have it cruise back and forth in a lawnmower-type pattern, recording information about where it is and what it sees.
One conventional way of accomplishing that task is dead reckoning. The AUV starts out at a known position and simply keeps track of how fast and in what direction it's going; from that, it should be able to compute its location at any point in time. But small errors in measured speed and heading accumulate, and over time the position error grows "without bounds." Leonard likens it to mowing the lawn blindfolded. "If you just use dead reckoning, you're going to get lost," he says. Expensive accelerometers, gyroscopes and other equipment can make the error grow more slowly, but can't eliminate it entirely.
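The unbounded growth of dead-reckoning error can be seen in a toy simulation. All the numbers below (speed, noise levels, time step) are invented for illustration; the point is only that the longer the vehicle runs, the farther its estimated position drifts from the truth.

```python
import math
import random

random.seed(1)

def dead_reckon(n_steps, speed=1.0, heading=0.0,
                speed_noise=0.02, heading_noise=0.01, dt=1.0):
    """Integrate noisy speed/heading readings; return the final
    distance between the estimated and true positions."""
    x_true = y_true = x_est = y_est = 0.0
    for _ in range(n_steps):
        # True motion: constant speed and heading
        x_true += speed * math.cos(heading) * dt
        y_true += speed * math.sin(heading) * dt
        # Sensors report speed and heading with small random errors
        s = speed + random.gauss(0.0, speed_noise)
        h = heading + random.gauss(0.0, heading_noise)
        x_est += s * math.cos(h) * dt
        y_est += s * math.sin(h) * dt
    return math.hypot(x_est - x_true, y_est - y_true)

# Error tends to grow with mission length; exact values vary run to run
for steps in (100, 1000, 10000):
    print(steps, round(dead_reckon(steps), 2))
```

Even with zero-mean noise the error performs a random walk, so it keeps growing with mission length; a biased sensor (a miscalibrated compass, say) would make it grow linearly, faster still.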
So how can an AUV use poor data from relatively inexpensive sensors to build a map? To tackle that problem, Leonard and his team have been using a technique called Simultaneous Localization and Mapping, or SLAM. With this approach, the AUV records information, builds a map and concurrently uses that map to navigate. To do so, it keeps track of objects it observes; in the accident example, say, a particular piece of debris on the seafloor. When the AUV detects the same object a second time, perhaps from a different vantage point, that new information creates a "constraint" on the current map. The computer program generating the map now adds that object and at the same time optimizes the map to make its layout consistent with this new constraint. The map adjusts, becoming more accurate.
"So you can use that information to take out the error, or at least some of the error, that has accrued between the first time you saw that object and the next time you saw it," Leonard says. Over time, the program continues to optimize the map, finding the version that best fits the growing set of observations of the vehicle's environment.
In some cases, the AUV may see the same object again just a few minutes later. Identifying it as the same object is easy. But sometimes, especially when surveying a large area, the AUV may see the same object early on and then again much later, possibly even at the end of its travels. The result is a "loop closing" constraint. "That's a very powerful constraint because it lets us dramatically reduce the error," Leonard says. "That helps us get the best estimate of the trajectory of the vehicle and the structure of the map."
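The effect of a loop-closing constraint can be sketched with a minimal one-dimensional pose graph, far simpler than the group's actual algorithms. In this made-up example, odometry consistently overestimates each step, and a single loop-closing measurement (the same landmark seen at the start and at the end of the run) pulls the whole trajectory back toward the truth via least-squares optimization, done here with plain gradient descent.

```python
def optimize_poses(odometry, loop, n_iters=2000, lr=0.05):
    """Minimise the squared error of the odometry constraints plus one
    loop-closing constraint. The first pose is anchored at 0."""
    n = len(odometry) + 1
    # Initialise with the dead-reckoned trajectory
    poses = [sum(odometry[:i]) for i in range(n)]
    i, j, meas = loop  # constraint: poses[j] - poses[i] should equal meas
    for _ in range(n_iters):
        grad = [0.0] * n
        for k, u in enumerate(odometry):
            r = poses[k + 1] - poses[k] - u   # odometry residual
            grad[k + 1] += 2 * r
            grad[k] -= 2 * r
        r = poses[j] - poses[i] - meas        # loop-closure residual
        grad[j] += 2 * r
        grad[i] -= 2 * r
        grad[0] = 0.0                         # keep the first pose fixed
        poses = [p - lr * g for p, g in zip(poses, grad)]
    return poses

odometry = [1.1, 1.1, 1.1, 1.1]  # drifting odometry; true steps are 1.0
loop = (0, 4, 4.0)               # same landmark seen at start and end
poses = optimize_poses(odometry, loop)
print([round(p, 2) for p in poses])  # final pose pulled from 4.4 toward ≈4.08
```

Without the loop closure the final pose would sit at 4.4, an error of 0.4; the single extra constraint spreads the correction over every step, cutting the end-of-track error several-fold, which is the "dramatic reduction" the quote describes.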
While SLAM has been in use for several decades, the Leonard group has made significant advances. For example, they've come up with new computational algorithms that can calculate the most likely map given a set of observations, and can do it at high speed and with unprecedented accuracy, even as new sensor information continues to arrive. Another algorithm can help determine whether a feature that the robot sees now is in fact the same one it saw in the past. Thus, even with ambiguous data, the algorithm can reject incorrect "feature matching" that would have made the map less rather than more accurate.
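A much cruder stand-in for such feature-matching logic, useful only to show the idea, is a distance gate: reject a candidate match whenever the new detection falls implausibly far from where the mapped feature is predicted to be. The function name, threshold and sigma values below are all invented for illustration.

```python
import math

def gate_match(predicted, observed, sigma, n_sigma=3.0):
    """Toy data-association gate: accept a candidate match between a
    mapped feature and a new detection only if the detection lies
    within n_sigma standard deviations of the predicted position."""
    dist = math.hypot(observed[0] - predicted[0],
                      observed[1] - predicted[1])
    return dist <= n_sigma * sigma

# Predicted position of a mapped piece of debris, vs. two new detections
predicted = (10.0, 20.0)
print(gate_match(predicted, (10.5, 20.3), sigma=0.5))  # True: plausible match
print(gate_match(predicted, (14.0, 25.0), sigma=0.5))  # False: rejected
```

Real systems weigh the full uncertainty of both the vehicle and the feature (for example with a Mahalanobis-distance test) rather than a single fixed sigma, but the principle is the same: an implausible match is worth rejecting, because a wrong constraint distorts the whole map.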
Finally, their methods ensure that uncertainty is explicitly addressed. Leonard emphasizes that SLAM may not produce a perfect map. "It's easy for a vehicle to get fooled by errors in the acoustic information," he says. "So we don't want to be overconfident. There's a certain inherent uncertainty to the sensor data, and it's important to get that uncertainty right. So we're not only building the map but also including the right error bounds on it."
A problem of particular interest to Leonard is using AUVs to enable rapid response to accidents and other unforeseen events. For example, one challenge during the April 2010 Deepwater Horizon oil spill was determining whether there was a spreading plume of oil and, if so, tracking where it was going. A network of AUVs working together could play a critical role in carrying out such tasks.
To that end, Leonard and his team are developing techniques that will enable AUVs to communicate with one another so they can navigate and collect information cooperatively. "If they can share information, they can accumulate data far more quickly than if they work alone," he says. "Together, they'll be able to sweep a large area and quickly produce the best possible map so that people can understand what's going on and develop and implement an effective response."
This story is republished courtesy of MIT News, a popular site that covers news about MIT research, innovation and teaching.
Next: Biomimetic pressure sensors help guide oceangoing vessels.
Provided by Massachusetts Institute of Technology