There is no shortage of killer robots in the realms of science fiction: RoboCop, the Terminator, the Cybermen, the Borg. But these are just dystopian imaginings of some far-flung future, right?
Well, probably not so distant, to be honest.
We are living in an age of unprecedented technological advancement, and with robotics and artificial intelligence taking center stage, we find ourselves having to ask a question that would previously have sounded a tad ridiculous: how long until the robots kill us all?
Robotics has found its way into our everyday lives and industry: robots help surgeons perform life-saving operations, robots defuse bombs, and robotic vehicles will very soon drive you to work while you catch up on your social media feed. Robots (autonomous programmable machines) are improving all the time, with smaller power supplies, finer motor control, and greater speed, power, and stability, and it's hoped that they will soon be put to even greater use.
One company in America has focused on creating robots that can deal with almost any obstacle they encounter. They've got four-legged robots that can climb mountains or open doors. Yes, I'm talking about Boston Dynamics and their rather disturbing, Black Mirror-in-real-life robot dog video that went viral recently. They've also got self-balancing two-legged robots that can do parkour, including perfectly executed backflips.
Across the Pacific in South Korea, engineers and Hollywood designers have teamed up to create "Method-2". Looking uncannily like a Jaeger from Pacific Rim, this 1.6-ton monster stands taller than two men and is designed to respond to its operator's arm and hand movements. Very cool engineering that could be used in disaster relief or in keeping the peace. And speaking of keeping the peace, autonomous machines capable of playing a more active role in military disputes are already very much in use. "Unmanned Aerial Vehicles", or military drones, have been used in various countries for tactical operations since the early 2000s. In many cases the drones are remotely piloted, but more recently a good deal of programming means that certain tasks require little or no pilot input; these birds are flying themselves. And since the drones can be loaded with everything from radar to visible-light and infrared cameras, and even, reportedly, weapons, that annoying buzzing overhead suddenly seems quite a bit more sinister.
The worry is that if you add weaponry to these more advanced, hypermobile robots, they start posing quite a danger to mankind. Imagine a future not far from now where drones are miniaturized and equipped with GPS, thermal imaging, facial recognition, and a few grams of shaped explosive. That might not sound like much, but a swarm of, say, just 10 of these drones could penetrate almost all defenses, evade almost every attack, and deliver a close-range explosion big enough to cleanly and efficiently take out a target. It's not here yet, but the technology is already in place, and that's seriously scary stuff.
Okay, I know what you're going to say: that is technically humans using machines to kill people, which is admittedly bad enough. But the question here is: are we at risk from the machines themselves?
Well, it's a matter of artificial intelligence, and hold on to your seats, because this is going to dial up quickly. We're surrounded by artificial intelligence right now: it's in our cars, our smartphones, our home assistants, our computer games. But there's relatively little to fear from these so-called ANIs (Artificial Narrow Intelligences). Charged with a specific task, like beating you at virtual chess or understanding your weird requests, an ANI is programmed to self-improve, to find ways of getting better at its job. What we haven't got yet is an Artificial General Intelligence, or AGI: an artificial mind that could think just like a human, vastly more complicated than the narrow intelligences we have today.
To be able to replicate the kind of lateral thinking and problem-solving that we can do with our eyes closed, an AGI would need a supercomputer more powerful than anything we've yet developed. But as I said before, our technology is improving at a rate of knots, partially thanks to Moore's law: the observation that computing power doubles roughly every 18 months. There is probably a limit to this exponential growth, but with researchers already looking for ways to keep up the pace, we are fast approaching a time when a human-level AGI will become feasible. And that is when we really
should start to worry, because unlike us puny humans with our squishy biological brains, an Artificial General Intelligence would have access to its own neural wiring. With general problem-solving skills and computer-speed mental agility, experts predict that an AGI would be able to quickly optimize and upgrade itself, evolving into an Artificial Super Intelligence (ASI) in the blink of an eye. An ASI would not only be able to think faster than us; it would also be able to think better than us. Being able to rewire its own brain, it could develop entirely new and more efficient ways of processing information, leaving us no more able to comprehend it than a chicken can understand a smartphone.
If that sounds a little far-fetched, consider this: in 2017, two chatbots tasked with negotiating a trade quickly slipped into a newly created language that was virtually incomprehensible to their programmers. The trade was completed successfully, but even these supposedly narrow intelligences left the world worriedly scratching its head.
Now, I haven't yet pointed out that an Artificial Super Intelligence could bring an end to all of the world's problems, solving economic crises, climate change, and diplomatic disputes in one fell swoop. But many experts think that a much more likely outcome is the complete eradication of all human life. While a Super Intelligence that evolved from an Artificial General Intelligence may be smarter than a human, it never was a human, and it wouldn't necessarily hold the core human values that keep us working together as a species. Working off its original programming, an Artificial Super Intelligence may stop at nothing to achieve its purpose. Make all humans happy? Well: inject them with Prozac. Eliminate all war? Destroy all warring nations. Fix climate change? Kill all humans.
More than 70 years ago, science fiction writer Isaac Asimov realized the dangers a self-improving intelligent robot would pose to the human race, prompting him to devise his Three Laws of Robotics: one, a robot must not harm a human or, through inaction, allow a human to come to harm; two, a robot must obey the orders given to it by humans, except where that would conflict with the first law; and three, a robot must protect its own existence, as long as that doesn't conflict with the first two laws. It may sound foolproof, but out in the real world it quickly becomes clear that morals cannot be one-size-fits-all. In a car crash, who do you save: the driver or the pedestrian? If weaponized micro-drones are given orders to seek and destroy, whose orders are to be trusted? And given the near inevitability of a superintelligence arising from the first general intelligence, programmers have only got one chance to get it right. This is a real and pressing danger, and one which has prompted over a hundred robotics and AI pioneers, including futurist Elon Musk, to call on the UN to ban the development of autonomous weapons and put the brakes on this exponential advancement until we have a chance to figure out this murky moral minefield.
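Asimov's laws are strictly ordered: each law applies only insofar as it doesn't conflict with the ones before it. As a toy illustration only, that priority structure can be sketched as a lexicographic comparison (every predicate here is a hypothetical placeholder; deciding what actually counts as "harm" is precisely the part that resists formalization):

```python
# Toy sketch of Asimov's Three Laws as strictly prioritized rules.
# Each candidate action is scored as a tuple (harms_human,
# disobeys_human, destroys_self). Python compares tuples element by
# element, so Law 1 always dominates Law 2, which dominates Law 3.

def choose_action(candidates):
    """Pick the action with the smallest violation, in priority order."""
    return min(candidates, key=lambda a: (a["harms_human"],
                                          a["disobeys_human"],
                                          a["destroys_self"]))

# A clear-cut case: sacrificing itself to obey an order beats refusing.
actions = [
    {"name": "obey order", "harms_human": False,
     "disobeys_human": False, "destroys_self": True},
    {"name": "refuse order", "harms_human": False,
     "disobeys_human": True, "destroys_self": False},
]
print(choose_action(actions)["name"])  # Law 2 outranks Law 3: "obey order"
```

The car-crash dilemma breaks this scheme immediately: if every available action harms some human, Law 1 scores them identically and the tie-break is arbitrary, which is exactly the one-size-fits-all problem described above.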
It may already be too late: if humans can be relied on for anything, it's ignoring rules and bans when they get in the way of their own priorities. When has there ever been a time that all seven billion of us agreed on anything?
So, when can we expect the rise of the machines to pose a real existential threat? How long have we got? Well, basing estimates on the current accelerating rate of tech improvement, the time until an Artificial Super Intelligence arises could be as little as 20 years according to some experts, or more than a hundred according to others. We'd best make sure we always build in an off switch.
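The arithmetic behind those spread-out estimates is easy to check. A minimal sketch, assuming (very optimistically) that Moore's-law doubling every 18 months simply continues:

```python
# Illustrative only: projects raw computing power under the assumption
# that Moore's law (a doubling roughly every 18 months) holds unabated.

def compute_multiplier(years, doubling_period=1.5):
    """How many times more computing power we'd have after `years` years."""
    return 2 ** (years / doubling_period)

# The two expert horizons mentioned above:
for years in (20, 100):
    print(f"{years} years -> ~{compute_multiplier(years):.2g}x today's power")
```

Twenty years is roughly 13 doublings, i.e. about a ten-thousand-fold increase in raw compute; whether raw compute ever translates into general intelligence is, of course, the real open question.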
Do you think
that we're going to meet our fate at the metallic manipulator arms of
hyper-intelligent robot overlords? How long do you think we've got?
I would love to hear your thoughts, so please do put them in the comments below.