My cousin Guillermo Cassinello Toscano was on the train that derailed in Santiago de Compostela, Spain, last week when it went around a bend at twice the speed limit. Cassinello heard a loud vibration and then a powerful bump and then found himself surrounded by bloody bodies in wagon number nine. Shaking, he escaped the wreckage through either a door or a hole in the train—he cannot recall—then sat amid the smoke and debris next to the track and began to cry. Seventy-nine passengers died.
Baking cupcakes can be as much a matter of social interaction as it is a mechanical exercise. Never is this more true than when your kitchen partner is a robot, whose always-right, ego-deflating advice can be off-putting, reports social psychologist Sara Kiesler and her colleagues at Carnegie Mellon University, in Pittsburgh. But having robots employ a different type of rhetoric could help soften the blow.
In one study, Kiesler’s former student Cristen Torrey, now at Adobe, observed how expert bakers shared advice with less-experienced volunteers. She recorded the interactions and extracted a few different approaches the experts used. For instance, “likable people equivocate when they are giving help,” Kiesler says. That is, they say things such as “Maybe you can try X” rather than simply “Do X.” They also soften their advice with extraneous words such as “Well, so, you can try X.”
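The rewriting rules the experts followed can be stated almost mechanically. As a purely illustrative sketch (the function and phrase lists are hypothetical, not from Torrey's study), the transformation from direct command to hedged, softened advice looks like this:

```python
# Illustrative sketch: turning a direct instruction into the equivocal,
# softened phrasing the expert bakers used. The hedge and softener wording
# is taken from the article; the function itself is a hypothetical example.

def hedge(suggestion: str, soften: bool = True) -> str:
    """Rewrite a suggestion phrase, e.g. 'folding the batter more gently',
    as hedged advice: 'Well, maybe you can try folding the batter more gently.'"""
    prefix = "Well, maybe you can try " if soften else "Maybe you can try "
    return prefix + suggestion.rstrip(".") + "."

print(hedge("folding the batter more gently"))
# → Well, maybe you can try folding the batter more gently.
```

The two ingredients map directly onto the study's findings: the "maybe you can try" hedge is the equivocation, and the leading "Well," is the extraneous softener.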
So Torrey filmed a few of her own scenarios in which either robots or people shared advice with actors pretending to learn how to bake, using various combinations of the language the experts used. Then she asked a new group of volunteers to watch the videos and rate how likable, controlling, and competent the advice-givers were. Equivocation, or hedging, made the advice-givers appear more competent, less controlling, and more likable. The effect was even stronger for the robots, suggesting that people find robots less threatening than humans when the robots use humanlike language. Kiesler presented some of these results on 4 March at the ACM/IEEE International Conference on Human-Robot Interaction, in Tokyo.
In a crisis control center in Montelibretti, Italy, several teams of firefighters used laptops to guide a robotic ground vehicle into a smoke-filled highway tunnel. Inside, overturned motorcycles, errant cars, and spilled pallets impeded the robot’s progress. The rover, equipped with a video camera and autonomous navigation software, was capable of crawling through the wreckage unguided while humans monitored the video footage for accident victims. But most of the time the firefighters took manual control once the robot was a few meters into the tunnel.
Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does pop up, it can be hard to spot. In some military studies, human sentries detected only 47 percent of visible threats.
A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. The system also uses electroencephalography (EEG) to identify spikes in brain activity that can correspond to subconscious recognition of an object.
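The combination described above amounts to a simple fusion rule: surface a video frame for review when either the machine-vision detector is confident or the operator's EEG shows a recognition spike. The sketch below is a hypothetical illustration of that idea only; the thresholds, signal representations, and function name are assumptions, not DARPA's actual design.

```python
# Hypothetical sketch of human-machine threat detection: flag a frame when
# the machine-vision confidence is high OR the operator's EEG amplitude
# spikes, which can correspond to subconscious recognition of a target.
# All thresholds and names are illustrative assumptions.

def flag_frames(cv_scores, eeg_amplitudes,
                cv_threshold=0.8, eeg_threshold=3.0):
    """Return indices of frames to queue for closer human review.

    cv_scores      -- machine-vision detection confidence per frame (0..1)
    eeg_amplitudes -- z-scored EEG response amplitude per frame
    """
    flagged = []
    for i, (cv, eeg) in enumerate(zip(cv_scores, eeg_amplitudes)):
        if cv >= cv_threshold or eeg >= eeg_threshold:
            flagged.append(i)
    return flagged

frames = flag_frames([0.2, 0.9, 0.1, 0.5], [0.4, 0.2, 3.5, 1.0])
print(frames)  # frame 1 caught by vision, frame 2 by the EEG spike
```

The OR rule is what makes the pairing attractive: each channel covers the other's misses, which is why a fused system could beat the 47 percent detection rate of humans alone.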