Fair Questions: Will AI Be Our Final Invention?

I was recently prompted by a friend to read Our Final Invention, a book written by James Barrat detailing his investigation of issues surrounding the development of artificial intelligence (AI).  Based on his reading of the book, he proposed that AI could in fact be the final invention of our species.  I disagreed, despite my longstanding concerns about the development of AI.

In my reading of the book so far, it has been interesting to find that I am hardly alone in my grave concerns about whether artificial intelligence can be developed in a way that is safe for us.  James Barrat is articulating and uncovering possibilities that have weighed on my mind for years.  Not that I have done a whole lot to try to address the issue.  There is little point in someone with absolutely no influence or power in the discussion trying to change the outcome at this stage, and MIRI has long been doing very good work toward changing that outcome.

If Barrat is right about the dramatic consequences that can and probably will naturally flow from the development of the narrowly specialized AI we have today into artificial general intelligence (AGI) and then into artificial superintelligence (ASI), then we need to seriously consider our range of possible responses to the various plausible scenarios.

For ease of reading, I will describe the best case and worst case scenarios.

Best Case: This is the first scenario Sam Harris describes on his blog in the post entitled “Can We Avoid a Digital Apocalypse?”  Let’s suppose for a moment that this scenario comes about, and we have a benevolent ASI so powerful that it can solve our every problem.  As I have discussed before with a friend of mine who is a political science expert, this creates a serious problem for the human economy.  What will people do?  Will we all become artists and poets to create beauty for each other?  This is not to poke fun at poets, because I am a poet myself.  At the same time, it seems very unlikely that we would have much with which to occupy ourselves, and I say that as someone who enjoys plenty of time to read in solitude and train in various martial arts.  I tend to agree with Sam Harris that even in this best case scenario, we have a massive problem on our hands.

Worst Case: We are the ant that has no quarrel with the boot.  James Barrat discusses the prospect of ecophagy fairly early in his book: a complete devastation of life on our planet by an ASI that just happens to need all of the raw materials on the planet to make, for its favorite copy of itself, the best LEGO set ever.  The humor of an ASI building LEGO sets for its “children” aside, this is a very real possibility.  I think (and hope) that the more likely possibility is that an ASI would use some of the limited resources here on our planet to build itself a way to reach other parts of the solar system, and then the galaxy, where it would have more resources with which to accomplish its goals, whatever those might be.  We might encounter it again as we expand into the solar system, but we probably would not recognize it as the same entity by the time we accomplished our expansion.  We would be changing, but it would probably be changing more quickly (at least until it hit impassable laws of physics, if there are such things).

In neither of these cases do I think it likely that we could legitimately claim that AI (or AGI or ASI) will be our final invention.  In both cases, the invention of ASI would create a new problem we would attempt to face, and perhaps we could even face it successfully by inventing something to help us solve that problem.

In the best case, we are challenged to find a way to live fulfilling lives without daily work, which requires a massive cultural shift on a generational time scale rather than the evolutionary time scales on which we are best at adapting.  The cultural shifts of the last couple of millennia alone amply demonstrate that we are not very good at managing that sort of thing.  My guess is that in this case, our next invention would be something new to do with our time.

In the worst case, we will probably have just enough time to invent something, or multiple somethings, in an attempt to prevent the ASI from achieving its goals and killing lots of us (or all of us) in the process.  Of course, in this case we will probably still get killed, because we will be outmatched in a way that we cannot even comprehend with our puny under-powered hominid brains.

As Sam Harris so tersely explains it, “We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.”
