By Victoria Woollaston for MailOnline
Google's Boston Dynamics released a video last week designed to show off a smaller, lighter version of its robotic dog, called Spot.
But the video received an unexpected backlash after people began complaining that the 'dog' in the clip had been mistreated.
In the footage, employees are seen kicking Spot to prove how stable the machine is on its feet, but the demonstration has been dubbed 'cruel' and 'wrong', and has even raised concerns about robotic ethics.
The four-legged, 160lb (73kg) robo-pet can run, climb stairs, jog next to its owner and correct its balance on uneven terrain, even when kicked.
It was built by Google-owned Boston Dynamics and is the 'little brother' of the firm's larger Cujo, or 'big dog'.
Boston Dynamics has not revealed what Spot will be used for, but its video showed the robot-animal climbing up and down hills, walking through offices and, of course, being kicked repeatedly.
Following the video's release, viewers posted their concerns on Twitter. One user wrote: 'Kicking a dog, even a robot dog seems wrong.'
Another said: 'Just wrong, kick a robot dog as practice: Google's dog robot looks too real for comfort when getting kicked.'
A third added: 'When I first saw [the] Boston Dynamics video I was very disturbed regarding dog-kicking. I'm not the only one.'
A fourth added: 'Spot the dog has kickstarted a legit ethical debate over robot rights.'
Equally, comments on MailOnline's original story about Spot included: 'If I ever run across this guy I'm going to kick HIM...poor doggy,' and 'Where's PETA. Look. He kicked that dog!'
To prove this point, CNN filmed reactions to the video in which many viewers said they were shocked and surprised.
But not everyone was as concerned.
One Twitter user wrote: 'Kicking a dog is wrong; it feels, breathes, and remembers. A robot is a piece of sheet metal.'
Others said: 'Are they alive? You people need clarity,' and 'Is it cruel to kick a hammer? A rock? A cardboard box?'
When questioned about the video, animal rights group PETA said: 'PETA deals with actual animal abuse every day, so we won't lose sleep over this incident.
'While it's far better to kick a four-legged robot than a real dog, most reasonable people find even the idea of such violence inappropriate, as the comments show.'
Noel Sharkey from the University of Sheffield, UK, added: 'The only way it's unethical is if the robot could feel pain.'
This is not the first time Boston Dynamics has used this tactic to demonstrate its machines' balancing skills.
In 2008, it released a similar video for its Big Dog in which the larger model was also kicked by employees.
Big Dog was also filmed walking through snow, and the video shows how the robot can correct its stance when walking on ice.
Spot and Big Dog are electrically powered and hydraulically actuated robots that walk, trot and climb steps.
A sensor on the robot's head helps it navigate rough terrain and spot when humans or another robo-dog are nearby, allowing it to follow its owner and run in formation.
During the video, the robot is also shown 'going for a walk' with its 'big brother'.
Last year, a mathematician said robots will never have feelings because computers cannot handle any process that completely integrates information, so they cannot be conscious or capable of feeling.
Professor Phil Maguire of the National University of Ireland in Maynooth said that consciousness cannot be created in a physical machine in finite time using limited memory.
Using a mathematical framework for consciousness, developed over the last decade by Giulio Tononi at the University of Wisconsin-Madison, Professor Maguire concluded that the ability to integrate information is a key feature of consciousness.
He believes that integrated information can't be broken down into smaller components in conscious minds, because the brain contextualises information.
For example, when we see a red triangle our brains don't register the shape as a colourless triangle plus a shapeless coloured area. Instead, we see it as a whole - a red triangle - and understand the 'wider picture'.
Therefore, if consciousness is based on the integration of lots of pieces of information, computers cannot be conscious or capable of experiencing emotions like humans.
'It doesn't necessarily mean there is some magic going on in the brain that involves some forces that can't be explained physically. It is just so complex that it's beyond our abilities to reverse it and decompose it,' Professor Maguire said.
His research may mean that while humans may never find love with a robot, they are unlikely to become robots' slaves either.
This theory was put to the test recently in a paper by Dr Anders Sandberg from the Future of Humanity Institute at Oxford University.
In his 'Ethics of brain emulations' research, he posed the question: 'In the future it's possible we will be able to create artificial human brains that emulate a real human - but what are the ethicalities and moralities of doing this?'
In his paper Dr Sandberg considered a future in which AI may be commonplace in so-called 'lesser beings'.
If brain emulation becomes possible we could in theory clone animals to create, for example, virtual laboratory rats.
There is much opposition to performing scientific experiments on rats and other animals in the modern day - but Dr Sandberg questions whether people will have similar objections to experimenting on an animal that was artificially created.
'In particular, emulations of animals could be used instead of real animals for experiments in education, science, medicine or engineering,' Dr Sandberg wrote.
'If it is cruel to pinch the tail of biological mice, the same cruel impulse is present in pinching the simulated tail of an emulated mouse.
'Treating emulations well might be like treating dolls well: it might not be morally obligatory but it is compassionate.'
He also suggested that it might be necessary to perform neuroscience experiments on 'real' animals in order to make brain emulation a possibility - which itself could lead to opposition.
Dr Sandberg likened the scenario to abortion in the modern day and the battle between people who are pro-choice and those who are pro-life.
If an emulation was run for just a millisecond of time before being deactivated, some might argue that this would constitute a 'murder' of sorts, destroying a life as it had been created.