The Future Will Need Us to Reboot It

I’ve been reading a bit about the Technological Singularity. It’s an interesting and chilling idea, conceived by people who aren’t testers. It goes like this: the progress of technology is increasing exponentially. Eventually, A.I. technology will exist that is capable of surpassing human intelligence and improving its own. At that point, called the Singularity, the future will not need us… Transhumanity will be born… A new era of evolution will begin.

I think a tester was not involved in this particular project plan. For one thing, we aren’t even able to define intelligence, except as the ability to perform rather narrow and banal tasks super-fast, so how do we get from there to something human-like? It seems to me that efforts to create machines that will fool humans into believing they are smart are equivalent to carving a Ferrari out of wax. Sure, you could fool someone, but it’s still not a Ferrari. Wishing and believing don’t make it a Ferrari.

Because we know how a Ferrari works, it’s easy to understand that a wax Ferrari is very different from a real one. Since we don’t know what intelligence really is, even smart people will easily mistake wax intelligence for the real thing. In testing terms, then, I have to ask: “What are the features of artificial intelligence? How would you test them? How would you know they are reliable? And most importantly, how would you know that human intelligence doesn’t possess secret and subtle features that have not yet been identified?” Being beaten at chess by a chess computer is no evidence that such a computer can help you with your taxes, or advise you on your troubles with girls. Impressive feats of “intelligence” simply do not encompass intelligence in all the forms in which we routinely experience it.

The Google Grid

One example is the so-called Google Grid. I saw a video the other day called Epic 2014. It’s about the rise of a collection of tools from Google that together create an artificial mass intelligence. One of the features of this fantasy is an “algorithm” that automatically writes news stories by cobbling together pieces of other news stories. The problem with that idea is that whoever conceived it seems to know nothing about writing. Writing is not merely text manipulation. Writing is not snipping and remixing. Writing requires modeling a world, modeling a reader’s world, conceiving of a communication goal, and finding a solution that achieves that goal. To write is to express a point of view. What the creators of Epic 2014 seem to be imagining is a system capable of really, really bad writing. We already have that. It’s called Racter, and it came out years ago. The Google people are essentially proposing a better Racter. The chilling thing is that it will fool a lot of people, whose lives will be a little less rich for it.
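
To see what that “algorithm” would amount to, here is a toy of my own devising. Everything in this sketch is an assumption of mine (a simple bigram remixer; Epic 2014 never specifies how its story-writer would actually work):

```python
import random

def remix(corpus_text, length=30, seed=None):
    """Naive bigram remixer: stitches fragments of source text together.
    This is text manipulation, not writing: no model of the world,
    no model of a reader, no communication goal."""
    rng = random.Random(seed)
    words = corpus_text.split()
    # Map each word to every word that ever follows it in the corpus.
    followers = {}
    for a, b in zip(words, words[1:]):
        followers.setdefault(a, []).append(b)
    word = rng.choice(words)
    out = [word]
    for _ in range(length - 1):
        choices = followers.get(word)
        # Dead end in the chain: jump to a random word and keep stitching.
        word = rng.choice(choices) if choices else rng.choice(words)
        out.append(word)
    return " ".join(out)

print(remix("the markets rose today as investors shrugged off fears "
            "that the markets would fall today on those same fears", seed=1))
```

Run it and you get text that is locally plausible and globally meaningless. Wax writing.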

I think the only way we can get to an interesting artificial intelligence is to create conditions for certain interesting phenomena of intelligence to emerge and self-organize in some sort of highly connectionist, networked soup of neuron-like agents. We won’t know whether it really is “human-like”, except perhaps after a long period of testing, but growing it will have to be a delicate and buggy process, for the same reason that the development of complex software is delicate and buggy. Just like HAL in 2001: maybe it’s really smart, or maybe it’s really crazy and tells lies. Call in the testers, please.
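
For what it’s worth, here is the flavor of soup I have in mind, shrunk to a toy. Every detail is an assumption of mine (a Hopfield-flavored network with a Hebbian update rule), a sketch rather than a recipe:

```python
import random

random.seed(42)
N = 40
# A random soup: every agent weakly connected to every other agent.
w = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
state = [random.choice([-1, 1]) for _ in range(N)]

for step in range(2000):
    i = random.randrange(N)
    # Agent i adopts the weighted opinion of its neighbors.
    total = sum(w[i][j] * state[j] for j in range(N) if j != i)
    state[i] = 1 if total >= 0 else -1
    # Hebbian drift: agents that agree strengthen their connection.
    for j in range(N):
        if j != i:
            w[i][j] += 0.01 * state[i] * state[j]

print("emergent pattern:", "".join("+" if s > 0 else "-" for s in state))
```

Some pattern emerges from the randomness, and nothing in the code tells you whether the emergent thing is smart or crazy. You have to test it.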

(When HAL claimed in the movie that no 9000-series computer had ever made an error, I was ready to reboot him right then.)

No, you say? You will assemble the intelligence out of trillions of identical simple components and let nature, plus stimulation from data, build the intelligence automatically? Well, that’s how evolution works, and look how buggy THAT is! Look how long it takes. Look at the narrowness of the intelligences it has created. And if we turn a narrow and simplistic intelligence to the task of redesigning itself, why suppose it is more likely to do a good job than a terrible one?
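
To make the narrowness concrete, here is a toy of my own construction, nobody’s actual proposal: evolution reduced to mutation plus selection against a single fixed target.

```python
import random

random.seed(0)
TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # A deliberately narrow fitness function: agreement with one target.
    return sum(a == b for a, b in zip(candidate, TARGET))

individual = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while individual != TARGET:
    generations += 1
    # Mutate one random position; keep the child if it is no worse.
    i = random.randrange(len(TARGET))
    child = individual[:i] + random.choice(ALPHABET) + individual[i + 1:]
    if fitness(child) >= fitness(individual):
        individual = child

print(f"evolved {individual!r} after {generations} generations")
```

It gets there, eventually, and what it produces can spell exactly one word. Ask it to do your taxes and see what happens.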

Although humans have written programs, no program has yet written a human. There’s a reason for that: humans are oodles more sophisticated than programs. So the master program that threatens to take over humanity would require an even more masterful program to debug it. But there can’t be one, because THAT program would require a still more masterful program to debug it… and so on.
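
The regress fits in a few lines. This is purely illustrative; the premise in the comment is the one I just stated:

```python
def debugger_for(sophistication):
    # Premise: a trustworthy debugger must be strictly more
    # sophisticated than whatever it is asked to debug.
    return debugger_for(sophistication + 1)

try:
    debugger_for(1)
except RecursionError:
    print("no fixed point: every debugger needs a bigger debugger")
```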

The Complexity Barrier

So, I predict that the Singularity will be drowned and defeated by what might be called the Complexity Barrier. The more complex the technology, the more prone it is to breakdown. In fact, much of the “progress” of technology seems to be accompanied by a process of training humans to accept increasingly fragile technology. I predict we will discover that the amount of energy and resources needed to surmount the Complexity Barrier approaches infinity.
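
Here is a back-of-the-envelope version of that prediction, under assumptions that are entirely mine: model a system as n components in series, each working with probability r, so the whole thing works with probability r to the nth power. Holding system reliability fixed while n grows forces every component toward perfection.

```python
# Serial-system reliability: all n components must work for the system to work.
r = 0.999  # each component is 99.9% reliable
for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}: system works with probability {r ** n:.6f}")
    # Per-component reliability needed to keep the whole system at 99%:
    needed = 0.99 ** (1 / n)
    print(f"             need r = {needed:.9f} per component")
```

At a million parts, the tolerable defect rate per part is about one in a hundred million. That is what an approach to infinity looks like from the inside.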

In the future, technology will be like weather. We will be able to predict it somewhat, but things will go mysteriously wrong on a regular basis. Things fall apart; the CPU will not hold.

Until I see a workable test plan for the Singularity, I can’t take it seriously.