Fair Questions: What’s unique about the threat from ASI?

There are those who understandably wonder what is so special about the prospect of artificial superintelligence that it worries people like Bill Gates, Stephen Hawking, and Elon Musk enough for them to take special note of it as a potential threat.

It’s clearly not that it could kill us all.  That could easily happen without the emergence of an artificial superintelligence.  Even a pretty basic well-designed robot could kill us all if we weren’t careful in designing it.  Nanotechnology without an artificial superintelligence driving it could still accomplish an ecophagy event (which I referred to in a previous post) and destroy our species in the process.  Existing nuclear technologies are also a very real existential threat for humanity.

I would suggest that ASI would be unique as a threat in a few ways.  Some of them are psychological and others are practical.

  1. We would no longer be an apex predator.
  2. We would no longer be in the top 2 most intelligent classes of beings on the planet.
  3. We would not be able to understand it with our puny, under-powered hominid brains.

These things are a massive blow to the human self-conception as supremely competent rational beings.  If we build a machine intelligence with the ability for recursive self-improvement and it surpasses us in every ability (which is the likely outcome), then many of us will probably lose confidence in our ability to control it or stop it from destroying us accidentally or intentionally.  That is a huge psychological disadvantage for us as beings striving to survive and thrive.

And then we come to the likely practical problems.

  1. It will evolve on timescales of seconds (and likely exponentially smaller timescales as those seconds go on) while we are evolving over the course of 10+ year generations at best.
  2. It will develop technologies more potent than ours and will have even less of a reason than we do now to make those technologies in such a way that they do not harm humanity directly or indirectly.
  3. It will be able to overcome the safeguards we devise, and it probably won’t take very long to do so.

As Stephen Hawking has noted, we would probably not be able to evolve fast enough to fight it effectively if it decided to eradicate us because it would be evolving many orders of magnitude more quickly than we are.

Even if we manage to build in an ethical framework to the AGI’s programming (not an easy thing) before we program it for recursive self-improvement and it becomes an ASI, it’s quite likely that it will be able to hack itself.  We hack our own brains with relatively crude strategies to overcome our cognitive errors, and there’s no particular reason to imagine that an ASI would be unable or unmotivated to do the same or better.

Let’s suppose that we create the best physical safeguards before we develop an ASI.  It has no access to the internet (just a local copy of the data from it updated regularly for analytical purposes and brought to it physically), so it can’t easily hack other devices remotely.  Let’s further suppose that it is in a reinforced bunker where jumping an air gap in communication isn’t possible as far as we can determine.  Let’s further suppose that we remove all the air from the chamber surrounding it to really make sure that it can’t hack anything remotely, not even with sound waves.  Let’s further suppose that instead of letting gullible human beings, the most intelligent of whom are tricked fairly easily, communicate directly with the ASI, we utilize narrow AI for that purpose and the narrow AI then goes through a process of cleaning up the communications from the ASI for human consumption to eliminate potential manipulation of the human analysts.

That probably sounds very safe to the average person.  Nonetheless, I would suggest that there are a couple of obvious points of failure.  An ASI could alter its communications to the narrow AI in such a way that the message delivered by the narrow AI to the human analysts is still enough to manipulate them into acting in such a way as to cede an advantage to the ASI.  For example, it could pretend that its cognitive functions were degrading rapidly and the human analysts might feel comfortable taking down some of the safeguards to help rehabilitate it.  Or one of the humans might feel confident that it could no longer ethically hold a sentient and sapient being in a cage, sabotaging that cage so as to free the ASI or taking a copy of the ASI out of the facility.

There are also probably some points of failure I’m not examining here because an ASI will likely be much better at finding them than I am.  There are also other kinds of safeguards I haven’t mentioned (and that’s intentional), but none of them take human weakness completely out of the situation as a potential point of failure.  And human beings are the weakest link in any well-devised security plan.  We could always resolve that problem by eliminating humans from the situation completely, but then why bother building an ASI if we can’t even benefit from it?  It would then become our most impressive useless invention ever.

Advertisements
This entry was posted in Politics, Science, Technology and tagged , , , , . Bookmark the permalink.

2 Responses to Fair Questions: What’s unique about the threat from ASI?

  1. Pingback: Fair Questions: Why would an ASI destroy us intentionally? | Isorropia

  2. Pingback: The Limits of Psychology: Artificial Intelligence Edition | Isorropia

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s