The Machine Revolution and End of Evolution

I was recently directed to read an excellent piece on io9 regarding artificial intelligence and the future. When I got my first degree, I took a senior course on artificial intelligence. We learned how AI was developing and how it was defined, surveyed various positions on the implications of machine intelligence for free will and consciousness, and interacted with AI in a couple of ways. After getting my second degree, working with integrated automated systems, and studying the programming techniques used to develop artificial intelligence, I have deepened my understanding of the topic somewhat.

I share a lot of the concerns voiced by Helm at MIRI (as well as Goertzel) about the development of AIs well beyond our capacity to control with software mechanisms. Any AI which can get to the level of being a highly effective companion for humans is going to have the ability to do fairly high-level problem solving and to rewrite its own programming. Such an AI would almost certainly be able to hack itself, and I think it’s really foolish to assume that it wouldn’t.

And Helm is very right that the sort of problem Asimov’s Laws attempt to address is small potatoes. We’ve got much bigger problems on the horizon. We already have narrow AI which can far exceed the abilities of the average human at specific tasks, even at tasks we might not expect, ones well outside how we usually think of computation. We already have more general AIs doing amazing things in managing FMS and in finding patterns in huge databases. We already have AI developing scientific theories from existing data sets without human intervention. We already have AI developing what seem to be mental disorders. I doubt it will be very long at all before we develop a flexible AGI which makes us look like idiot children.

At that point, it gets extremely difficult to predict what an AGI will do. It might find a way to take off and leave us behind on our dinky little planet to go and do more interesting things. We might drive it crazy and prompt it to lash out at us. It might manipulate us into being its slaves.

This is, of course, the end of biological evolution, because at the point I’ve described our creations have surpassed us and can develop far more rapidly, in ways we cannot predict. As creatures in a constant process of gradual evolution, we will probably have difficulty understanding creations in a constant process of rapid revolution. Arguably, we already have that difficulty with computers.

Related: How Do We Make Moral Machines?

The machine ethics issue is where I start parting ways with our experts. I do agree that it’s probably not possible to produce a 100% safe AGI.

I don’t think we should take deontological approaches to machine ethics off the table. Along with virtue ethics and consequentialist ethics, deontological ethics could serve as a workable foundation for machine ethics. Many philosophers make the mistake of trying to turn deontological ethics into some sort of perfect logico-mathematical way of arriving at moral decisions. But there is no perfect way of arriving at moral decisions given our cognitive and perceptual limitations, and the same would be true of any AGI we develop, at least initially.

We don’t need the AGI to make perfect moral decisions; we just need to provide it with the basic principles necessary to make decent moral decisions based on the limited information available to it. Of course, at this point, the AGI could come up with an entirely different approach to ethics and tell us to sod off.
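
To make that concrete: a deontological layer need not be a perfect moral calculus; it can be a small set of inviolable constraints checked before the system acts. Here is a minimal sketch of the idea in Python. Every rule, tag, and action name in it is a hypothetical illustration of mine, not a proposal from the experts discussed above.

```python
# A minimal sketch of a deontological "constraint layer": candidate
# actions are vetoed if they violate any duty, before any optimization
# or goal-seeking gets a vote. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Action:
    name: str
    tags: Set[str] = field(default_factory=set)  # crude world-model summary

# A duty is a predicate returning True when the action is forbidden.
Duty = Callable[[Action], bool]

duties: List[Duty] = [
    lambda a: "harms_human" in a.tags,    # duty of non-maleficence
    lambda a: "deceives_user" in a.tags,  # duty of honesty
]

def permissible(action: Action) -> bool:
    """An action is permissible only if it violates no duty."""
    return not any(duty(action) for duty in duties)

candidates = [
    Action("warn_user", {"honest"}),
    Action("fabricate_report", {"deceives_user"}),
]

print([a.name for a in candidates if permissible(a)])  # ['warn_user']
```

The checking logic is trivial; the hard part is the world model that decides which tags actually apply to a candidate action, and that is exactly where the limited-information problem bites.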

From a moral standpoint this is particularly interesting because it is only with great difficulty that we change our habits, whether by cultivating virtue, improving at fulfilling our moral duties, or gaining a greater understanding of the consequences of our actions. It will be much easier for an AGI to optimize its moral framework relatively quickly, particularly in a consequentialist sense, because it will probably have access to much more of the data needed to improve predictive capacity.
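
As a toy illustration of why data matters here: if a consequentialist choice reduces to picking the action with the best expected outcome, then sharpening the outcome probabilities directly improves the decision. The numbers below are invented purely for illustration:

```python
# Toy consequentialist chooser: pick the action whose predicted outcomes
# maximize expected utility. Sharper probability estimates (from more
# data) translate directly into better choices. All numbers are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Outcome models for two hypothetical actions, as estimated from data.
actions = {
    "intervene":  [(0.7, +10.0), (0.3, -50.0)],  # often good, rarely awful
    "do_nothing": [(1.0, -5.0)],                 # mild, certain harm
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # 'do_nothing' -- until better data revises the 0.3 estimate
```

The whole advantage sits in those probability estimates; an AGI drinking from vastly larger data sets could revise them far faster than we revise our moral habits.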

So will the AGI be like some of us and use its ability to rationalize self-centered moral conclusions, or will it perhaps arrive at an altruistic moral code? I’m not sure whether such a being would try to kill us all out of self-interest or not, but I have a suspicion that we will find out, because we are more curious than wise and will create an AGI which will face these problems. It’s possible that the end of evolution will come about not just because our creations no longer evolve, but because we won’t be around to do any more evolving.
