Baking cupcakes can be as much a matter of social interaction as it is a mechanical exercise. Never is this more true than when your kitchen partner is a robot. A robot's always-right, ego-deflating advice can be off-putting, report social psychologist Sara Kiesler and her colleagues from Carnegie Mellon University, in Pittsburgh. But having robots employ a different type of rhetoric could help soften the blow.
In one study, Kiesler’s former student Cristen Torrey, now at Adobe, observed how expert bakers shared advice with less-experienced volunteers. She recorded the interactions and extracted a few different approaches the experts used. For instance, “likable people equivocate when they are giving help,” Kiesler says. That is, they say things such as “Maybe you can try X” rather than simply “Do X.” They also soften their advice with extraneous words such as “Well, so, you can try X.”
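The transformation is simple enough to sketch in code. Here's a minimal, hypothetical illustration (my own, not anything from the study) of how a robot's dialogue layer might turn a bare command into the kind of equivocal, softened advice the expert bakers favored; the phrase lists and the `hedge` function are assumptions made up for the example:

```python
import random

# Equivocal openers and softening fillers of the sort Torrey observed.
HEDGES = ["Maybe you can try", "You could try", "Perhaps try"]
SOFTENERS = ["Well,", "So,", "Well, so,"]

def hedge(instruction: str, soften: bool = True) -> str:
    """Rewrap a bare instruction (gerund form, e.g. 'adding more flour')
    as equivocal advice, optionally prefixed with a softening filler."""
    advice = f"{random.choice(HEDGES)} {instruction}."
    if soften:
        # Lower-case the hedge and prepend an extraneous filler word or two.
        advice = f"{random.choice(SOFTENERS)} {advice[0].lower()}{advice[1:]}"
    return advice

print(hedge("adding more flour"))
# e.g. "Well, so, maybe you can try adding more flour."
print(hedge("sifting the flour", soften=False))
# e.g. "Perhaps try sifting the flour."
```

The point isn't the string manipulation, of course; it's that the difference between "Do X" and "Well, maybe you can try X" is a small, mechanical change with an outsized social effect.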
So Torrey filmed a few scenarios of her own in which either robots or people shared advice with actors pretending to learn how to bake, using various combinations of the language the experts had used. Then she asked a new group of volunteers to watch the videos and rate how likable, controlling, and competent the advice givers seemed. The researchers found that equivocation, or hedging, made the advice givers appear more competent, less controlling, and more likable. The effect was even stronger for the robots, suggesting that people find robots less threatening than humans when the robots use humanlike language. Kiesler presented some of these results on 4 March at the ACM/IEEE International Conference on Human-Robot Interaction, in Tokyo.
Read the rest of this news story at IEEE Spectrum [html] [pdf]