From a crisis control center in Montelibretti, Italy, several teams of firefighters used laptops to guide a robotic ground vehicle into a smoke-filled highway tunnel. Inside, overturned motorcycles, errant cars, and spilled pallets impeded the robot’s progress. The rover, equipped with a video camera and autonomous navigation software, was capable of crawling through the wreckage unguided while humans monitored the video footage for accident victims. But most of the time the firefighters took manual control once the robot was a few meters into the tunnel. Although the search was just an experiment, microphones recorded clear signs of stress during several tests of the scenario: The firefighter driving the rover spoke at a higher pitch, and members of some teams talked over one another’s radio transmissions. And while the human drivers may have improved the robot’s performance, they should have been focused on the search for victims, says artificial-intelligence expert Geert-Jan Kruijff of the German Research Center for Artificial Intelligence, in Saarbrücken, who consulted on the experiment. The drivers were micromanaging their robots.
The same thing has already happened in the real world: After the Fukushima nuclear power station’s meltdown in 2011, a human operator refused to use a ground robot’s autonomous navigation and managed to get the rover tangled in its own network cable. At a disaster scene with lives on the line, human rescuers need to learn to trust their robotic teammates, says Kruijff, or they’ll have their own meltdowns.