The Limits of Psychology: Artificial Intelligence Edition

In a recent Big Think video, Steven Pinker proposed a very interesting explanation for why “alpha males” express fear or apprehension about the rise of artificial intelligence.  Since I’m often viewed as an alpha male and have repeatedly expressed concerns about artificial intelligence and how poorly equipped we might be to handle it effectively, the video piqued my interest.

I’ve argued before that the existence of artificial superintelligence poses a serious psychological problem for us, so I especially wanted to know what a famous psychologist like Steven Pinker would identify as the psychological genesis of our fears about a machine revolution that would end our evolution.

Apparently, he thinks that we manly men just have this fear because we think, rather egocentrically, that an artificial intelligence would be just like us manly men: aggressive and dominating, power-hungry and drunk with greed for more resources.  A few men might object that this is sexist.  But my question, regardless of any sexist assumptions, is this: is it true?

It seems intuitively plausible that men who are domineering and aggressive might assume that an artificial intelligence would share their traits, because we human beings naturally interpret the psychological motivations of other beings egocentrically.  So it does make some sense that what Pinker calls an “alpha male” might incorrectly project his own traits onto an artificial intelligence.

What seems odd to me about Pinker’s explanation isn’t that it’s implausible for the men he’s describing; it’s that most of the men I know who are concerned or fearful about an artificial intelligence wiping out our species are nerdy “beta males” rather than muscle-bound “alpha males”.  The men who seem most concerned about this are very unlikely to have aggressive or domineering personalities, even if they are high achievers within their field of study or business.

A psychologist might reply that it would make sense for beta males to fear the domination and aggression of their alpha male antagonists, projecting the alpha males’ psychological motivations onto the artificial intelligence.  But this just seems like obvious motivated reasoning by a psychologist who doesn’t understand the limits of psychology as applied to artificial intelligence.

The other problem is that the arguments being made by serious intellectuals about the dangers of artificial intelligence are not based on the assumption that it will be like us, but rather on the very strong evidence that we are terrible at being perfect programmers.  We also have a very mixed record when it comes to safeguards in software.  How long did it take to develop effective safeguards against data loss on PCs?
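To make the worry about safeguards concrete, here is a toy Python sketch.  The function and its names are hypothetical, invented purely for illustration and not taken from any real system, but the failure mode is real: in IEEE 754 floating point, a NaN compares false against every number, so a corrupted reading sails straight through a check that looks airtight.

```python
# A toy sketch of a software safeguard with a silent blind spot.
# Everything here is hypothetical and illustrative; it is not drawn
# from any real AI system.

def within_safe_limit(metric: float, limit: float = 100.0) -> bool:
    """Return True if the monitored metric looks safe."""
    if metric > limit:
        return False  # over the limit: trip the safeguard
    return True       # apparently safe

print(within_safe_limit(50.0))          # True  -- normal reading
print(within_safe_limit(500.0))         # False -- safeguard trips, as designed
print(within_safe_limit(float("nan")))  # True  -- corrupted reading slips through
```

The point isn’t this particular bug.  The point is that a code review would very likely pass that check, and blind spots of exactly this kind survive even skilled, well-resourced teams.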

Pinker is happy to wave away the idea that our safeguards will be insufficient, but I’ve worked in the computing and technology fields long enough to learn that even very skilled, well-resourced teams of programmers make big mistakes on a regular basis.  There is no reason to think that the teams of skilled programmers working on artificial intelligence will have no blind spots and will magically find a way to safeguard against every major problem.

On the other hand, we have lots of reason to think that blind spots will be consequential in developing artificial superintelligence in a way they would not be in developing a touchscreen interface.  An artificial intelligence, Pinker rightly points out, is unlikely to be malicious.  That lack of an unhealthy psychological motivation just doesn’t matter to how dangerous it would be, as I’ve explained before at length.

A shark doesn’t have to possess malicious psychological motivations to kill me.  Nor does a frightened bear trying to protect its cubs.  Nor does a copperhead snake when I’m intruding on its territory.  That doesn’t change the fact that I might well end up dead in those situations.  And that’s an important limitation of psychology to understand.
