nettime's avid reader on Sat, 16 May 2020 11:14:39 +0200 (CEST)



<nettime> Dogs Obey Commands Given by Social Robots


spectrum.ieee.org/automaton/robotics/robotics-software/dogs-obey-commands-given-by-social-robots

By Evan Ackerman

One of the things that sets robots apart from intermittently animated
objects like toasters is that humans generally see robots as agents.
That is, when we look at a robot, and especially a social robot (a robot
designed for human interaction), we tend to ascribe to it some amount of
independent action, along with motivation at varying levels of
abstraction. Robots, in other words, have agency in a way that toasters
just don’t.

Agency is something that designers of robots intended for human
interaction can to some extent exploit to make the robots more
effective. But humans aren’t the only species that robots interact with.
At the ACM/IEEE International Conference on Human-Robot Interaction (HRI
2020), researchers at Yale University’s Social Robotics Lab led by Brian
Scassellati presented a paper taking the first step towards determining
whether dogs, which are incredibly good at understanding social
behaviors in humans, see human-ish robots as agents—or more
specifically, whether dogs see robots more like humans (which they
obey), or more like speaker systems (which they don’t).

The background research on dog-robot interaction that forms the basis
for this work is incredibly interesting. The paper is absolutely worth
reading in its entirety, but here are a few nuggets of prior work that
should help you understand how dogs interact with non-human animated
objects:

> Pongrácz et al. tested whether dogs followed commands from their
> guardians at various levels of embodiment. The guardians were either
> present in the same room as the dogs (the 3D condition), interacted
> with the dogs via live-streamed, life-size interactive video (the 2D
> condition), or interacted with the dogs only through their voices
> played over a loudspeaker (the 0D condition). Dogs followed the
> commands most reliably in the 3D condition, least consistently in the
> 0D condition, and at an intermediate level in the 2D condition.
> 
> Lakatos et al. conducted a study to test how dogs responded to
> pointing cues given by a PeopleBot with customized arms. The
> PeopleBot either exhibited human-like behaviors or no social
> behaviors, depending on the condition. In the interaction phase, a
> dog participant observed the robot interacting with the guardian
> either socially or mechanically for six minutes, after which the
> robot delivered a food reward to the dog. In the subsequent testing
> phase, the robot pointed to one of two buckets containing hidden food
> rewards. Dogs performed better in the condition with the social robot
> than with the nonsocial robot. However, no evidence suggested that
> mean performance with the social robot was significantly different
> from 50 percent, which is the chance level in two-choice tasks. The
> dogs therefore did not consistently follow the pointing cues provided
> by the social robot, even though dogs in general follow human
> pointing cues well.


To summarize, dogs don’t respond very well to commands from loudspeakers
or video systems, and they also don’t really pay attention when a
mechanical-looking robot points at things, even though dogs understand
what pointing means when humans do it.
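
The 50 percent figure in the Lakatos et al. excerpt is simply the
chance level of a two-bucket choice. As a minimal sketch of how such a
chance-level check is typically run (a plain binomial test, with
hypothetical counts rather than data from that study), in Python:

    # Hypothetical counts, for illustration only (not data from Lakatos et al.).
    # Tests whether choosing the pointed-at bucket k times out of n trials
    # differs from the 50% chance level of a two-choice task.
    from scipy.stats import binomtest

    k, n = 23, 40  # hypothetical: correct choices out of total trials
    result = binomtest(k, n, p=0.5, alternative="two-sided")
    print(f"{k}/{n} correct, p = {result.pvalue:.3f}")
    # A large p-value means there is no evidence the dogs beat chance.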

Curiously, Aibo (a dog-like robot) tends not to be perceived by real
dogs as a competitor for food, and dogs in general don’t interact with
Aibo in dog-like ways. Dogs often react to Aibo in other ways, but it’s
more like “what the heck is that thing” rather than “that’s a weird
dog,” similarly to how some dogs react to things like vacuums (robot or
otherwise). So if dogs understand on some level that robot dogs aren’t
actually dogs, and don’t interact with them in dog-like ways, how would
dogs interact with social robots that are designed to interact with
humans and therefore have some human-like features?

The Yale researchers put together an experiment that compared how dogs
respond to commands given by a Nao to how dogs respond to the same
commands given by a speaker system. A group of 34 dogs participated in
the experiment, and each dog was tested with either the speaker or the
Nao (but not both) in a room that also included a researcher and the
dog’s guardian. After a brief intro to the testing environment, the
robot or speaker called the dog’s name (using the same voice), and the
researchers noted whether the dog paid attention at all. Then the robot
or speaker would talk to the guardian for a bit in an attempt to
“de-novelty” itself, provide a treat to the dog, and then give a “sit”
command, which was the real test.

Results of the experiments showed that the dogs paid significantly more
attention to the robot than to the speaker, and were significantly more
likely to follow a sit command from the robot. Dogs obeyed the sit
command over 60 percent of the time when it came from the robot, but
less than 20 percent of the time when it came from the speaker, even if
they did look a bit confused about the whole thing at times.
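
The article doesn’t say how that difference was tested, but as a rough
sketch of how two obedience rates like these could be compared, here is
a Fisher’s exact test with hypothetical per-group counts (assuming
roughly 17 dogs per condition out of the 34, at about 60 and 20 percent
obedience); this is an illustration, not the paper’s actual analysis:

    # Hypothetical counts, for illustration only: not the paper's data or method.
    # Compares how many dogs sat for the robot vs. for the loudspeaker.
    from scipy.stats import fisher_exact

    robot_sat, robot_no_sit = 10, 7      # ~60% of an assumed 17 dogs
    speaker_sat, speaker_no_sit = 3, 14  # ~20% of an assumed 17 dogs

    odds_ratio, p_value = fisher_exact([[robot_sat, robot_no_sit],
                                        [speaker_sat, speaker_no_sit]])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    # A small p-value suggests the robot/speaker gap is unlikely to be chance.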

While these results are certainly interesting, it’s important to
emphasize that the goal here was, according to the researchers, to
“answer the question of whether dogs could respond to a social robot at
all.” The researchers weren’t (yet) trying to determine what factors
might increase or decrease that likelihood, but instead, they were
giving the dogs a sort of ideal opportunity. For example, the dogs’
guardians were instructed to interact with the robot, talking to it and
making eye contact, to help encourage the dogs to see the robot as
friendly and alleviate any potential anxiety while also drawing
attention to the robot.

It’s also not entirely clear exactly what the dogs are responding to.
Lead researcher Meiying Qin highlighted some differences for us between
a robot and a speaker that could have caused the behavioral differences,
including:

* A robot may be perceived as an agent, while a loudspeaker may not;

* A robot is embodied, while the speaker is not (the agent who provided
the commands via the loudspeaker is not physically present in the
testing room);

* A robot provides both visual and audio cues, while a speaker provides
only audio cues.

We asked Qin whether she thought it would make a difference if the robot
was more or less humanoid, how much of a face it had, whether it smelled
like anything, and other traits that dogs might associate with
human-ness. “Since dogs are very sensitive to human social cues, the
robot being a humanoid or not may make a difference,” Qin says.
“However, if a non-humanoid robot behaved like an agent (e.g., behaved
like a dog, or exhibited any social behaviors), dogs may also respond in
a social manner.”

She explained that factors such as whether the robot has eyes, or
smells like a person, could also affect how dogs respond to it. But Qin
adds that the researchers need further evidence to give a more
definitive answer. “Whether the robot moves or not could
affect the dogs differently,” she says. “A robot that just stands still
without any movement may not present itself as an agent to the dog, and
the dogs may not respond to such a robot socially. On the other hand, a
robot that moves too much (e.g., the robot walks) or moves too fast will
simply scare the dogs.”

Now that we have evidence that dogs do respond to social robots, the
next step is to figure some of this stuff out. And it’s not just about
making more effective social robots for dogs, of course: the larger
point is that studying how dogs behave towards social robots relative
to humans can help us understand how social robots affect our own
behavior, too.

Full lecture: https://www.youtube.com/watch?v=cMCfQE0G6QQ
Full paper: https://dl.acm.org/doi/abs/10.1145/3371382.3380734




#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: