why I’m a crabby patty about AI and cognitive science

http://fredrikdeboer.com/2014/02/21/why-im-a-crabby-patty-about-ai-and-cognitive-science/

I’m sorry if I am a grump about artificial intelligence. It just happens to be a subject on which our media frequently appears both insufficiently educated and unwilling to learn. My frustration stems from a basic category error, which can be boiled down to this:

My cellphone is much better than my cellphone five years ago, ergo artificial intelligence/the Singularity/techno-utopia is right around the corner. 

If that’s an exaggeration, it’s not much of one. Now it happens that this is a generally unhelpful way to think about technology. Technological progress is constant, but it is stunning how unevenly distributed it is. This leads to complaints of the type “they can put a man on the moon but they can’t make a deodorant that lasts past 2 PM.” This crops up in specific fields all the time. There’s been a well-documented problem in personal electronics where battery development has not kept pace with development in processors, leading to lower effective usage time thanks to the increased power requirements of faster processors. But you can extend this observation in all manner of directions, which is why futurism from the past is often so funny.

This kind of thinking is especially unhelpful in the realm of artificial intelligence because it so thoroughly misunderstands the problem. The problem with AI is that we don’t really know what the problem is, or agree on what success would look like. With your cellphone (or any number of similar rapidly-improving technologies) we are perfectly aware of what constitutes success, and we know pretty well how to achieve it. With AI, defining the questions remains a major task, and defining success remains a major disagreement. That is fundamentally different from issues like increasing processor power, squeezing more pixels onto a screen, or speeding up wireless internet. Failing to see that difference is massively unhelpful.

If people want to reflect meaningfully on this issue, they should start with the central controversy in artificial intelligence: probabilistic vs. cognitive models of intelligence. I happen to have sitting around an outline and research materials for an article I’d like to write about these topics. The Noam Chomsky – Peter Norvig argument got press recently, and I’m glad it did, but I think it’s essential to say: this fundamental argument goes back 50 years, to when Chomsky was first becoming the dominant voice in linguistics and cognitive science and launching his initial assault on corpus linguistics. And it goes back to an even older and deeper question about what constitutes scientific knowledge. I’d love to write about these issues at great length and with rigorous research, but it would be a major investment of effort and time, so I would want to do it for a publication other than here, and unfortunately, none of the places I pitched it to got back to me. (Which does not surprise me at all, of course.) I hope to someday write it. But let me give you just the basic contours of the problem.

The initial project of artificial intelligence was to create machines capable of substantially approximating human thought. This had advantages from both a pure science standpoint and an engineering standpoint; it was important to know how the human brain actually functions because the purpose of science is to better understand the world, but it was also important because we know that there are a host of tasks that human brains perform far better than any extant machine, and it is therefore in our best interest to learn how human brains think so that we can apply those techniques in the computerized domain. What we need to find out– and what we have made staggeringly little progress in finding out– is how the human brain receives information, how it interprets information, how it stores information, and how it retrieves information. I would consider those minimal tasks for cognitive science, and if the purpose of AI is to approximate human cognitive function, necessary prerequisites for achieving it.

In contrast, you have the Google/Big Data/Bayesian alternative. This is a probabilistic model where human cognitive functions are not understood and then replicated in terms of inputs and outputs, but are rather approximated through massive statistical models, usually involving naive Bayesian classifiers. This is the model through which essentially every recommendation engine, translation service, natural language processing system, and similar recent technology works. Whether you think these technologies are successes or failures likely depends on your point of view. I would argue that what Google Translate does is very impressive from a technical standpoint. I would also argue that as far as actually fulfilling its intended function, Google Translate is laughably bad, and all the people who say that you can use it for real-world communication have never actually tried to use it for that function. And there are some very smart people who will tell you it’s not improving. One of the great questions for the decade ahead is whether there is a plateau effect in many of these Bayesian models, a point at which exponentially increasing the available data in these systems ceases to result in meaningful improvements. Regardless of your view on this or similar technologies, it’s essential that anyone talking about AI reflect understanding of this divide, what the controversies are regarding it, who the players are, and why they argue the way they argue.
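
To give a flavor of what that probabilistic approach looks like at its smallest scale, here is a toy sketch of a naive Bayesian text classifier. This is my own illustration, not anything drawn from Google or any real system; it labels text purely from word-count statistics, with no model of what any of the words mean.

```python
# Minimal multinomial naive Bayes text classifier with add-one (Laplace) smoothing.
# Purely illustrative: real systems train on vastly more data, but the principle
# is the same, i.e. label things by word statistics, not by understanding them.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)   # label -> Counter of word frequencies
    vocab = set()
    for text, label in docs:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(text, label_counts, word_counts, vocab):
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log P(label) plus the sum of log P(word | label), with add-one smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy usage: the "judgment" comes entirely from co-occurrence counts.
model = train([("great phone love it", "positive"),
               ("terrible battery hate it", "negative")])
print(classify("love this great phone", *model))   # -> positive
```

Scale that basic move up to billions of documents and much fancier statistical machinery and you have the family of systems I’m describing: pattern-matching over inputs and outputs, with no attempt to replicate the cognition in between.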

There are many people who are not interested in the old-school vision of AI. They think that what we should actually care about is using computers to perform useful tasks, and that we shouldn’t worry about the way human thinking works or about getting computers to model it. That’s a respectable position. I think in its stronger form, it’s essentially declaring defeat in the pursuit of science and its purpose, but there are a lot of dedicated, well-connected, well-respected people who simply want to build useful systems and leave cognitive science to others. (That’s where the money is, for obvious reasons.) But even for those who are task-oriented, there are profound reasons to want to know how the human brain works. Because what some very smart people will tell you is that the fancy Big Data applications that rely on these Bayesian probability models are in fact incredibly crude compared to animal intelligence, and require a tremendous amount of calibration and verification by human beings behind the scenes. Does Amazon really know what you like? Are its product recommendations very helpful? Are they much better today than they were five years ago?

In this wonderful profile, Doug Hofstadter expresses the pessimistic view of AI very well. AI of the old-fashioned school has made so little progress because cognitive science has made so little progress. I really don’t think the average person understands just how little we understand about the cognitive process, or just how stuck we are in investigating it. I constantly talk with people who assume that neuroscience is already solving these mysteries. But that’s the dog that hasn’t barked. Neuroscience has given us an incredibly sophisticated picture of the anatomy of the brain. It has done remarkably little to tell us about the cognitive process of the brain. In a very real way, we’re still stuck with the same crude Hebbian associationism that we have been for 50 years. Randy Gallistel (who, in my estimation, is simply the guy when it comes to this discussion) analogizes it to a computer scientist looking at the parts of a computer. The computer scientist knows what the processor does, what the RAM does, what the hard drive does, but only because he knows the computational process. He knows the base-2 processing system of a CPU. He knows how it encodes and decodes information. He knows how the parts work together to make the input-output system work. The brain? We still have almost no idea, and looking at the parts is not working. It’s great that people are doing all of these studies looking at how the brain lights up in an fMRI scanner when exposed to different inputs, but the actual understanding that has stemmed from this research is limited.
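
For readers who have never seen it spelled out, the Hebbian rule really is as simple as “neurons that fire together wire together.” Here is a toy sketch, my own and purely illustrative, of that rule applied to a single simulated neuron.

```python
# A toy Hebbian "neuron": weights grow whenever input and output are active together.
# Purely illustrative; real neurons and real learning are not this simple.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 5
w = rng.normal(0, 0.1, n_inputs)      # small random starting synaptic weights
learning_rate = 0.1

for _ in range(20):
    x = rng.integers(0, 2, n_inputs).astype(float)   # presynaptic activity (0/1 "spikes")
    y = w @ x                                        # postsynaptic activity (linear unit)
    w += learning_rate * y * x                       # Hebb's rule: strengthen co-active synapses

# The weights end up tracking raw input correlations and nothing else: no symbols,
# no addressable storage, no way to read a particular memory back out. Left running,
# the weights also grow without bound, which is part of why the rule is called crude.
print(w)
```

The contrast with the computer scientist in Gallistel’s analogy, who can say exactly how information is encoded, stored, and retrieved, is the whole point.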

Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an “emergent phenomenon.” That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an “emergent phenomenon” that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying “I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.” It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate prescribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion is often expressed with a kind of religious faith that I find disturbing.

C. elegans is a nematode, a microscopic worm. It’s got something like 300 neurons. We know everything about it. We know everything about its anatomy. We know everything about its genome. We know everything about its neurology. We can perfectly control its environment. And we have no ability to predict its behavior. We simply do not know how its brain works. But you can’t blame the people studying it; so much of the money and attention is sucked up by probabilistic approaches to cognitive science and artificial intelligence that there is a real lack of manpower and resources for solving a set of questions that are thousands of years old. You and me? We’ve got 80 billion neurons, and we don’t know what they’re really up to.

Now read this post from Matt Yglesias. I just chose it as an indicative example; it’s pretty typical of the way this discussion happens in our media. Does it reflect on any of this controversy and difficulty? It does not. Now maybe Yglesias is perfectly educated on these issues. He’s a bright guy. But there’s no indication that he’s interacting with the actual question of AI as it exists now. He’s just giving the typical “throw some more processing power at it!” line. And the most important point is– and I’m going to italicize and bold it because it’s so important– the current lack of progress in artificial intelligence is not a problem of insufficient processing power. Talking about progress in artificial intelligence by talking about increasing processor power is simply a non sequitur. If we knew which problems could be solved by more powerful processors, we’d already have solved some of the central questions! It’s so, so frustrating.

I am but a humble applied linguist. I understand most of this on the level of a dedicated amateur, and on a deeper level in some specific applications that I research, like latent semantic analysis. I’m not claiming expertise. And I think there is absolutely a way to be a responsible optimist when it comes to artificial intelligence and cognitive science. I am not at all the type to say “computers will never be able to do X.” That’s a bad bet. But many people believe we’re getting close to Data from Star Trek right now, and that’s just so far from the reality. Journalists and writers have got to engage with the actual content. Saying “hey, technology keeps getting better, so the skeptics are wrong” only deepens our collective ignorance– and is even more unhelpful in the context of a media that has abandoned any pretense to prudence or care when it comes to technology, a media that is addicted to techno-hype.

OK, so my short version is almost 2000 words. It’s a sickness.
