Bleeding Hearts and Artificial Intelligence

The now widely publicized Heartbleed vulnerability in OpenSSL has forced various organizations to patch a weakness they had been living with for some time.  I’m left wondering how long we can continue the lackadaisical “panic and patch” approach to dealing with software security vulnerabilities.  Given the apparent potency of today’s hackers, what do we do when faced with the reality that even multifactor authentication can be overcome so simply?

As our software grows in size, complexity, and interconnectedness with other software, we must realize that we are likely creating new ways to break it even as we fix bugs.  What programmer has not fixed a bug in a program only to have the fix cause a new problem?  Even with years of experience, it is simply not possible to anticipate every way in which someone might attack your software.  The tools available for attacking it will improve over time.  The ways in which it communicates with other software will change over time.  The operating system on which it runs will change over time.  Even if a programmer manages to create perfect, bug-free software, it will not remain free of vulnerabilities.

What’s worse, there is no way to keep up with all these changes without enormous time and labor if we stick with the traditional “panic and patch” approach.  We need a more proactive way of identifying and fixing problems.  My suggestion is that we use narrow AI to constantly attack our own systems, find vulnerabilities, and then either patch them automatically or notify an administrator.  We don’t necessarily want an AI constantly attacking a live system, so a cloned drive on a separate server might be needed.  That probably sounds expensive, but so are major data breaches, and even minor ones can be incredibly costly if the exposed information is important enough.
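A full narrow-AI attacker is well beyond a blog post, but the simplest ancestor of the idea already exists today: a fuzzer that throws random inputs at a piece of software and records which ones crash it, so an administrator can be notified. Here is a minimal sketch in Python; `parse_record` is a hypothetical, deliberately buggy target standing in for the system under test, and `fuzz` is the automated attacker.

```python
import random
import string

def parse_record(data: str) -> dict:
    # Hypothetical target with a deliberate flaw: it trusts the
    # declared length field and assumes it is numeric.
    length_str, _, payload = data.partition(":")
    length = int(length_str)  # raises ValueError on non-numeric input
    return {"length": length, "payload": payload[:length]}

def fuzz(target, trials: int = 1000, seed: int = 0) -> list:
    """Throw random inputs at `target`; collect every input that crashes it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 20))
        )
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashing_inputs = fuzz(parse_record)
print(f"Found {len(crashing_inputs)} crashing inputs in 1000 trials")
```

A real system would replace the random input generator with something smarter (coverage-guided mutation, learned input grammars), run against a cloned copy of the production server rather than a toy function, and feed each crash into a ticketing or auto-patching pipeline.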

I would bet that some companies are already doing this, but too many are far behind the curve and are setting themselves up for major data losses.  And companies with financial clout are not the only ones who can use AI to probe systems for vulnerabilities.  Hackers are perfectly capable of writing an AI to do the same, and as information on AI programming techniques becomes more widely available and code libraries for such AI are built, this will only get easier.

I was watching Captain America: The Winter Soldier, and I noticed that the encrypted portable drive was apparently protected by a narrow AI which actively fought decryption attempts.  I realize this is a Hollywood summer blockbuster from Marvel, and fanciful technology is often on display in such films, but we really need to move this idea out of the realm of the fanciful and into the realm of everyday reality.

