
Our Conceit of Consciousness

MIT recently held a symposium called “Brains, Minds, and Machines,” which took stock of the current state of cognitive science and machine learning and debated directions for the next generation of advances. The symposium kicked off with perspectives from some of the lions of the field of cognitive science, such as Marvin Minsky and Noam Chomsky.

A perspective by Chomsky throws down the gauntlet with regard to today’s competing schools of AI development:

Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

Of course, Chomsky is tacitly assuming that “meaning” and truly understanding natural language is something much more than just statistics. But is it really? That certainly seems like common sense. On the other hand, if we look closely at the brain, all we see are networks of neurons firing in statistical patterns. The meaning somehow emerges from the statistics.
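To make the debate concrete, here is a toy sketch (purely illustrative, not any researcher’s actual system) of the kind of statistics Chomsky derides: a bigram model that “learns” which word follows which simply by counting, with no explicit representation of meaning at all.

```python
from collections import defaultdict, Counter

# Toy corpus, echoing Chomsky's bee example.
corpus = "the bee dances and the bee returns to the hive".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "bee" -- the most frequent follower of "the"
```

The model mimics a regularity in the data without any account of *why* the words follow one another, which is precisely Chomsky’s complaint; the counter-argument above is that neurons, too, may be doing little more than this at vastly greater scale.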

I suspect what Chomsky really wants is an “explanation engine”–an explanatory facility that can convincingly explain itself to us, presenting to us many layers of richly nuanced reasoning. Patrick Winston at the same symposium said as much:

Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: “Once you have stories, you have the kind of creativity that makes the species different to any other.”

This capability has been the goal of AI from the beginning, and the fact that, 50 years later, it has still not been delivered is clearly not for lack of trying. I would argue that this perceived failure is a consequence of the early AI community falling prey to the “conceit of consciousness” that we are all prone to. We humans believe that we are our language-based explanation engine, and so we tell ourselves, and convince ourselves, that true meaning is solely a product of the conscious, language-based reasoning capacity of that engine.

But that ain’t exactly the way nature did it, as Winston implicitly acknowledges. Inarticulate inferencing and decision making capabilities evolved over the course of billions of years and work quite well, thank you. Only very recently did we humans, apparently uniquely, become endowed with a very powerful explanation engine that provides a rationale (and often a rationalization!) for the decisions already made by our unconscious intelligence—an endowment most probably for the primary purpose of delivering compact communications to others rather than for the purpose of improving our individual decision making.

So focusing first on the explanation engine gets things exactly backward in trying to develop machine intelligence. To recapitulate evolution, we first need to build intelligent systems that generate good inferences and decisions from large amounts of data, just like we humans continuously and unconsciously do. And like it or not, we can only do so by applying those inscrutable, inarticulate, complex, messy, math-based methods. With this essential foundation in place, we can then (mostly for the sake of our own conceit of consciousness!) build useful explanation engines on top of the highly intelligent unconsciousness.

So I agree with Chomsky and Winston that now is indeed a fruitful time to build explanation engines—not because AI directions have been misguided for the past decade or two, but rather because we have actually come such a long way with the much-maligned data-driven, statistical approach to AI, and because there is a clear path to doing even more wonderful things with this approach. Unlike 50 years ago, our systems are beginning to actually have something interesting to say to us; so by all means, let us help them begin to do so!
