
Our future, our universe, and other weighty topics


Monday, November 3, 2014

Why Machines Will Not Soon Be as Intelligent as Men

The distinguished British newspaper the Financial Times ran a very dubious story yesterday, under the headline “Computers will soon become more intelligent than us.” Throughout the story, this extremely doubtful assertion was treated as fact. Various distinguished persons were quoted making some overheated assertions. Elon Musk was quoted as saying that supercomputers may be more dangerous than nukes. Someone named Peter Thiel said, “People are spending way too much time thinking about climate change, way too little thinking about AI,” as if artificial intelligence were a much bigger threat than global warming.

The Financial Times has apparently drunk the Singularity Kool-Aid. The Singularity is the notion that there will soon be an “intelligence explosion” that causes computers and robots to become far more intelligent than us. Supporters of this idea are fond of showing graphs of exponential increases in computer memory or speed, and of suggesting that machine intelligence will undergo an exponential increase similar to the ones shown in such graphs.

Let me give three strong reasons for thinking that computers will not soon have anything like the general intelligence that humans have.

Reason #1: Developing a computer as intelligent as a human would require a long period of exponential progress in software, but software has always progressed at a slow, gradual rate, never at an exponential rate.

There are two main parts of a computer: the hardware and the software. To have anything like a computer as intelligent as a human, you would need computer hardware millions of times better than what we have now, and computer software millions of times better as well.

Progress in hardware has often occurred exponentially, as in the case of Moore's law: the observation that roughly every 18 months to two years, the speed or memory capacity of computers doubles. Since you only have to double something 20 times to make it a million times greater, there is some hope that in this century we will have computer hardware as fast as it would need to be to surpass human intelligence.

But when it comes to software, we have a totally different situation. Software development does not progress at anything like an exponential rate; it progresses at a much slower rate. Two years from now the world's software won't be twice as good as it is now. We will be lucky if it is 25% better, because software only gets perhaps 10% better from one year to the next.

What is the difference between growth at a modest 10% annual rate and growth at an exponential “double every two years” rate? It's a world of difference. If hardware speed doubles every two years, it will increase a million times in only about 40 years. But if software gets better by only 10 percent per year, it will take 100 years to get only about 14,000 times better. To have computers as intelligent as a human, you need software not just thousands of times better than today's, but millions of times better.
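
A few lines of Python make the gap concrete. This is a minimal sketch, assuming a two-year doubling time for hardware and the rough 10%-per-year software figure used above; neither number is a measurement:

    # Compare exponential hardware growth with slower software growth.
    # Assumes hardware doubles every 2 years and software improves by
    # 10% per year, compounded -- rough illustrative figures, not data.
    years = 100
    hardware_gain = 2 ** (years / 2)   # doubles every two years
    software_gain = 1.10 ** years      # 10% better each year
    print(f"After {years} years, hardware is {hardware_gain:,.0f} times better")
    print(f"After {years} years, software is {software_gain:,.0f} times better")
    # Prints a hardware gain of about 1.1 quadrillion times,
    # and a software gain of only about 13,781 times.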

In short, while progress in computer hardware may be telling us, “The Singularity is coming soon,” progress in software development is telling us the opposite: “This will take centuries at least.” You can't overcome this objection by asserting that computers will write their own software. While computers are good at cranking out code for small, specific problems, tackling an extremely high-level problem such as creating software for general intelligence requires a mind as smart as a human's. Dumb computers will not be able to write software that results in smart computers.

The following graph illustrates the point about slow growth versus exponential growth. If we assume that software capabilities will increase by 30% every two years (an optimistic assumption based on previous trends), the growth in software capabilities over the next forty years doesn't even register when plotted against the growth in hardware capabilities.
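
For readers who want to reproduce such a graph, here is a minimal Python sketch. It assumes the matplotlib plotting library, and the 30%-every-two-years software figure is just the optimistic assumption stated above:

    # Sketch of the graph described above: hardware doubling every two
    # years versus software gaining an assumed 30% every two years.
    import matplotlib.pyplot as plt

    years = range(0, 41, 2)                      # the next forty years
    hardware = [2 ** (y / 2) for y in years]     # doubles every 2 years
    software = [1.30 ** (y / 2) for y in years]  # +30% every 2 years

    plt.plot(years, hardware, label="Hardware (doubles every 2 years)")
    plt.plot(years, software, label="Software (+30% every 2 years)")
    plt.xlabel("Years from now")
    plt.ylabel("Capability (multiple of today's)")
    plt.legend()
    plt.show()

On such a plot the software curve hugs the bottom axis, which is exactly the point being made.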


Reason #2: Software intelligibility problems will act as a gigantic “speed bump” or brake that will tend to slow the development of software needed for computer intelligence.

In order to understand the software intelligibility problem, let us imagine a newly hired computer programmer named John, who starts work today on a very complicated computer system and talks with a veteran colleague named Will.

John: In my first week here I've been looking through the code, as you told me to do. Wow, that's a lot to try to understand. 100,000 lines of dense Java code.

Will: It's much worse. That code you were looking at was only the “core kernel” of our hugely complicated system. To understand the complete system, you have to also understand the 30,000 lines of web interface code, the 80,000 lines of database stored procedures, and the 50,000 lines of Perl batch processing code.

John: Wow. So is there anyone here who has ever read through all that stuff, to get a complete understanding of what is going on with this system?

Will: Hell no. That would take forever, and give you the world's biggest headache. Nobody here really “understands” the system. We just know enough about it to tweak it here and there.

I know for sure that a situation very similar to this exists today in many large software projects. We can call it a software intelligibility problem. As computer software gets more and more complex, and more and more lines of code are added (often written in hard-to-read computer languages), it becomes harder and harder for humans to understand what the code is doing.

What will happen as software progresses further, and the total number of lines of programming code in the world increases ten times, a hundred times, a thousand times? The software intelligibility problem will get much worse. As we try to move from one hundredth of the way to one tenth of the way toward the software needed for computers as intelligent as humans, we will tend to find ourselves with software systems that totally surpass our capacity to understand them. This will act as a gigantic brake that slows further progress.

What happens when humans don't understand a large software system they are working on? Various nasty things: programmers make “blind changes” without understanding the ramifications; more and more subtle bugs get introduced into a system; it becomes impossible to refactor or redesign a system; huge systems slowly evolve into nightmarish dead ends that no one wants to work on; you get “fragile systems” that easily break when someone tries to improve them; you see “code degradation” in which a body of programming code gets worse rather than better; you may end up with $100 million project failures. Such things often result in “negative progress.”

Rather than assuming that we will keep getting a software development progress rate of 10% or 15% per year, it is entirely possible that we will “hit the wall” fifteen or twenty years from now, as we find that our software systems have grown out of control, morphing into enigmatic sprawling systems, mysterious black boxes that we no longer understand. At that point we might not even be able to count on a 10% yearly progress rate in software development (itself totally inadequate to produce machines as smart as humans within 100 years).

Reason #3: We have very little realistic hope in this century of being able to use the human brain as a shortcut to reach artificial intelligence equaling human intelligence.

When faced with objections such as these, Singularity enthusiasts fall back on the idea that we will be able to get the software needed for artificial intelligence by studying the human brain. The idea is that we will somehow scan the human brain to learn the exact secrets of how it works, and then use those secrets as a shortcut to create intelligent machines.

While such an approach cannot be ruled out as a kind of “thousand year project,” there is no reason to hope that within a century any such approach will actually yield the software needed to make a computer as intelligent as a human. To understand what a “pipe dream” such a hope seems to be, we must step back and look at what a mystery human consciousness is.

We understand various things about the human brain and its components, and we understand something about how readings of brain activity (such as fMRI readings) differ during different types of mental activity. But we have essentially no understanding of the central matter: how brains are able to achieve the “miracle of consciousness.” Brains are physical things, and thoughts are mental things. We have no real understanding of how the first leads to the second, any more than we understand how the universe arose from nothingness at the time of the Big Bang. Nor do we have any real understanding of how the brain is able to store concepts or memories.

Moreover, it is quite possible that we will never understand such things. It is hard to imagine that we will one day examine some brain cells at higher magnification and say, “Aha, there is a thought being produced,” or “Yes, I see it now; there is a memory being stored.”

As I have discussed in earlier posts, there are numerous unexplained anomalies involving the operations of brains and consciousness (such as the fact that some people can function well despite having large parts of their brains destroyed or removed). So it is all too plausible that human consciousness requires some unknown factor (call it an X factor) involving something more than just neurons and brain electrical activity. In that case, all attempts to build machine intelligence by studying the human brain will probably be futile.

Even if one rejects such an idea, and sticks with a materialist view of consciousness as merely the result of millions of years of blind evolution, there still would be no reason to hope that brain secrets millions of years in the making could soon be unraveled in a way that leads to machine intelligence. One reason: there is no evidence of anything like a “thought code” or “neuron alphabet” that could be deciphered the way we deciphered the genetic code, which is simple enough to be printed on a single page. Having no such code that we can read and translate into computer code, there is little reason to hope that studying the brain will give us the software needed for machine intelligence.

Even six decades after the discovery of the structure of DNA, we are still trying to figure out the meaning of much of the information stored in our genes. If we were to discover a “thought code” or “neuron alphabet” in the brain, it would probably take us fifty years or more to unravel it. But we haven't even discovered such a code, and we have no particular reason to think that it exists.

Conclusion

In sharp contrast to the Financial Times' assertion that “computers will soon become more intelligent than us,” there is little reason to believe that within the next fifty years we will have anything like a computer with a general intelligence and understanding rivaling that of an intelligent human. Computers will keep getting better at information processing, will outperform humans at an increasing number of specialized tasks, and will perform all kinds of astonishing new tricks in the years ahead. We will see chatbots that mindlessly imitate human intelligence with ever greater skill. But there is very little reason to think that within fifty years any computer will have the general intelligence and understanding of a human. It is also all too possible that no computer ever will, because such intelligence may require a mysterious spiritual “X factor” that can never be reproduced through silicon and electronics.
