Fair Questions: Why would an ASI destroy us intentionally?

Recently I read a Popular Science article that discussed the impacts of artificial intelligence on our society.  The author correctly points out that there are much more immediate threats to us from artificial intelligence than an artificial super intelligence run amok like Ultron from the upcoming Avengers film.  Specifically, the use of artificial intelligence to automate tasks like high-frequency stock trades, loan approvals, or sentencing recommendations for convicted criminals could have massive implications for the quality and quantity of human life, and could turn out to be discriminatory in ways we would recognize as racial bias.

These are very real risks, and I suspect that we will face them very soon, sooner than we might be ready to deal with the inevitable harm that will occur from utilizing artificial intelligence for such tasks.  I think the author is right to warn that while we already know how to deal with human failures, and have social and legal structures in place to handle those failures, we do not have equivalent structures for the artificial intelligence that is already in use or in development.

Where he and I part ways is in his assessment that we have more to worry about from narrow or weak AI than we do from an artificial super intelligence.  I have explained in some detail why the threat from artificial super intelligence is unique and deserves greater consideration than we have yet given it as a species.  But I think the author has summed up well my reason for believing that there is a significant difference between the threat from narrow or weak AI and the threat from artificial super intelligence.

Just like we have a serious gap between our preparation to deal with human failures and our preparation to deal with the failures of narrow or weak AI, we have a serious gap between our preparation to deal with narrow or weak AI and our preparation to deal with an artificial super intelligence.  We have the capacity to develop social and legal structures to allow humans and narrow or weak AI to function together, but we may be lagging behind in using that capacity.  With time, we can probably learn to live with narrow AI without creating an unmanageable threat to the existence of our species.

It is also true that with time, we can probably learn to live with an artificial super intelligence without it being an existential threat for us.  Because we adapt only on evolutionary timescales while an artificial super intelligence could improve itself far more quickly, this might seem impossible, but perhaps if we start integrating artificial intelligence with our own by using it to augment our intelligence, we might buy some much-needed time in which to accomplish that state of affairs.  And if we can manage that, then we might be able to create structures to better manage our relationship with artificial super intelligence.

Of course, the distinction between us and an artificial super intelligence might be somewhat murky in such a scenario; it might no longer be a relevant distinction at all.  That said, even if we don’t manage to buy ourselves enough time to develop useful structures which can help us to manage that relationship or achieve a useful fusion between artificial intelligence and our native intelligence, why would we worry that an artificial super intelligence we develop might intentionally destroy us?  Would it even make sense for such an intelligence to waste its time destroying us?  Would it not have better things to do with its time?

As I have speculated before, such a being might just leave us on our little blue and green and brown planet while it takes off to do more interesting things.  It’s likely that an artificial super intelligence would not have much to fear from us.  It could probably find a way to survive our measly attempts to confine or kill it without too much trouble, and it would probably know that we are not currently a threat to its existence.  So why would it bother wiping us out?

I’m not sure how this situation would play out for an ASI from an opportunity cost perspective, but I can suggest some possibilities.

  1. It may need resources that we are currently consuming in order to leave us in the dust on our dinky little planet, and killing us may be the best way to ensure that it has access to those resources in sufficient amounts.  Ant meets boot with which it has no quarrel.
  2. It may rate the possibility of us becoming an existential threat to it as very low, but decide that since the cost of eliminating that threat before it can possibly materialize is negligible, it is a worthwhile investment.  Ant meets poisoned bait.
  3. It may decide that fighting a long-term war of attrition with us, once we decide that we need to be afraid of it, would cost more than simply exterminating us completely.  Ant meets flamethrower.

Obviously, I’m having a bit of fun with the “Does an ant have a quarrel with a boot?” line from the first Avengers film, but I do think it sums up the situation fairly well.  Not that I think these situations will occur in such a way that we will realize it at the time.  My best guess is that we won’t even notice that we have developed artificial super intelligence until after it has already happened, in much the same way that most people didn’t notice that we already have artificial intelligence performing high-frequency stock trades.  Our meeting with the boot could well be upon us before we notice that there is a boot.
