
The Architecture of Learning

If we want our systems to automatically learn, how should they be architected? The obvious thing to do is to take a lesson from the one “machine” that we know automatically learns: the brain. And, of course, what we find is that the brain is a connection machine, a vast network of neurons interconnected at synapses. A closer look reveals that these connections are not just binary in nature, either existing or not, but can take on a range of connection strengths. In other words, the brain is best represented as a weighted, or “fuzzy,” network. Furthermore, it is a dynamic fuzzy network, in that the behavior of one node (i.e., neuronal “firing”) can cascade throughout the network and interact and integrate with other cascades, forming countless different patterns. Out of these patterns (somehow!) emerges our mind, with its wondrous plasticity and its ability to learn so effortlessly.
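To make the idea concrete, here is a minimal sketch (in Python, with node names and connection strengths invented purely for illustration) of a weighted, “fuzzy” network in which activation cascades outward from a node, attenuated by the strength of each connection it crosses:

# A minimal sketch of a weighted ("fuzzy") network: nodes connected by
# edges whose strengths range from 0.0 to 1.0, with activation that
# cascades from node to node. All names and weights here are illustrative.

# Adjacency map: node -> {neighbor: connection strength}
fuzzy_network = {
    "A": {"B": 0.9, "C": 0.3},
    "B": {"D": 0.7},
    "C": {"D": 0.2},
    "D": {},
}

def cascade(network, start, threshold=0.1):
    """Propagate activation outward from `start`, attenuating it by each
    connection's strength and dropping cascades that fall below `threshold`."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in network[node].items():
            incoming = activation[node] * weight
            if incoming >= threshold and incoming > activation.get(neighbor, 0.0):
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

print(cascade(fuzzy_network, "A"))   # {'A': 1.0, 'B': 0.9, 'C': 0.3, 'D': 0.63}

Nothing about this toy is meant to be neurally realistic; it simply shows how a weighted network differs from a binary one: the strength of each connection shapes how far, and how strongly, a cascade propagates.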


Yes, taking this lesson may seem like, well, a no-brainer, but amazingly the lesson has generally been ignored when it comes to our systems. We’re well over half a century into the information age, but looking inside our organizations, we still find hierarchical system structures predominating (e.g., good old folders). The problem with hierarchy-based structures is that they are inherently brittle. They simply don’t embody enough relationship information to effectively encode learning. They don’t even scale (remember the original Yahoo! site?)—as we have all experienced to our great frustration. There is a reason nature didn’t choose this structure to manage information!

Fortunately, there has been a revolution in systems structure over the past decade—the rise of the network paradigm. The internet was, of course, the driver of this revolution, and we now find network-based structures throughout our Web 2.0 world—particularly in the form of social networks. But even these networks are not fuzzy—we are limited to establishing our relationships in binary terms, yes or no. Sure, we can categorize with lists and groups, but we are still at a loss to represent all of the relationship nuances that range from soul mates to good friends to distant acquaintances. And that makes it difficult to apply sophisticated machine learning techniques that truly add value. How many of the “recommended for you” suggestions in your favorite social network actually hit the mark, for example?

But the situation is even worse with regard to our organizations’ content. In this land that time forgot, our knowledge remains entombed in the non-fuzzy world of hierarchies or, at best, relational structures. Not only are these systems incapable of learning and adapting, but it is often a struggle even to find what you are looking for.

This sad state of affairs can and must be rectified. All we have to do is take our lesson from the brain: integrate our representations of people (i.e., social networks) with our content and allow the relationships to be fuzzy, and we have something that is architected a whole lot like the brain, i.e., architected for learning. The structure alone is not sufficient—we also need clever algorithms to operate against it, to create the dynamics and patterns that deliver the benefits of the learning back to us. But the architecture of learning is the necessary prerequisite, quite doable, and therefore quite inevitable.
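For what it’s worth, that integration can be sketched in a few lines as well. The following toy (again with invented node names and an invented update rule, not anything from an actual product) puts people and content in a single fuzzy network and strengthens connections as behavior accumulates:

# An illustrative sketch (names and update rule invented for this example):
# people and content items live in one network, connected by fuzzy weights
# that are strengthened as behavioral cues accumulate.

from collections import defaultdict

# (node, node) -> connection strength in [0.0, 1.0]
weights = defaultdict(float)

def strengthen(a, b, amount=0.1):
    """Nudge the fuzzy connection between two nodes toward 1.0."""
    w = weights[(a, b)]
    weights[(a, b)] = weights[(b, a)] = w + amount * (1.0 - w)

# Behavioral cues, not explicit declarations, build the relationships.
strengthen("person:alice", "doc:quarterly-report", amount=0.3)  # Alice reads a document
strengthen("person:alice", "person:bob", amount=0.2)            # Alice messages Bob
strengthen("person:bob", "doc:quarterly-report", amount=0.1)    # Bob skims the same document

print(weights[("person:alice", "doc:quarterly-report")])  # 0.3
print(weights[("person:alice", "person:bob")])            # 0.2

The specifics of the update rule matter far less than the shape of the structure: one network, people and content together, with graded rather than binary relationships for learning algorithms to work against.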


The Chinese Room Explanation

My favorite recommendation explanation, a real honest-to-goodness computer-generated recommendation, is the one pictured below. I was amazed when I received it even though I had a hand in designing and developing the system that generated the recommendation—the system had truly become complex and unpredictable enough to spark a delighted surprise in its creators!

The recommended item of content in the screen grab is about AI, and the explanation engine whimsically mentions as an aside that it has been pondering the Chinese Room Argument. The engine’s little joke is that, as AI aficionados know, the Chinese Room Argument is philosopher John Searle’s famous thought experiment suggesting that computer programs can never really understand anything in the sense that we humans do. The argument holds that all computers can ever do, even in theory, is operate by rote, and that they will therefore always be relegated to the realm of zombies, albeit increasingly clever zombies.

The Chinese room thought experiment goes like this: you are placed in a room and questions in Chinese are transmitted to you. You don’t know Chinese at all—the symbols are just so many squiggles to you. But in the room is a very big book of instructions that enables you to select and arrange a set of response squiggles for every set of question squiggles you might receive. To the Chinese-speaking person outside the room who receives your answers, you appear to fully understand Chinese. Of course, you don’t understand Chinese at all—all you are doing is executing a mind-numbing series of rules and look-ups. So, even though in theory you could pass the test posed by that other famous AI thought experiment, the Turing test, you don’t really understand Chinese. And this seems to imply that since, most fundamentally, all computers ever really do is execute look-ups and shuffle squiggles around in prescribed ways, no matter how complex this shuffling is, it can never amount to truly understanding anything.
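The rote character of the room is easy to caricature in code. In this toy sketch, a couple of invented entries stand in for the “very big book,” and the function does nothing but match question squiggles to response squiggles:

# A toy version of the Chinese room: the "book of instructions" is just a
# lookup table mapping question squiggles to response squiggles. The entries
# are invented for illustration; nothing here "understands" anything, it
# only matches symbols to symbols.

rule_book = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    """Return the prescribed response for a question, by rote lookup alone."""
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # 我很好，谢谢。

However large the rule book grows, the mechanism never changes; that, in essence, is Searle’s point.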

That’s all there is to it—the Chinese Room Argument is remarkably simple. But it is infuriatingly slippery with regard to what it actually tells us, if anything. It has been endlessly debated in computer science and philosophy circles for decades now. Some have asserted that it proves computers can really never truly understand anything in the sense that we do, and certainly could not ever be conscious—because, after all, it is intuitively obvious that manipulations of mere symbols, no matter how sophisticated, are not sufficient for true understanding. The counterarguments are many. Perhaps the most popular is the “systems reply”—which posits that yes, it is true that you don’t understand Chinese, but the room as a whole, including you and the book of response instructions, understands Chinese.

All of these counterarguments result from taking Searle’s bait that the Chinese Room Argument somehow proves that there is a limit on what computer programs can do. What seems most clear is that these armchair arguments and counterarguments can’t actually prove anything. In fact, the only thing the Chinese Room really seems to assure us of is that there is no apparent limit on how smart we can make our zombie systems. That is, we can build systems that are impressively clever using brute-force programming approaches—even more clever than us, at least in specific domains. Deep Blue beating Garry Kasparov is a good example.

As I have implied previously, understanding, as we humans appreciate it, is a product of our own, unique-in-the-natural-world explanation engine, not of our powerful but inarticulate, unconscious, underlying neural network-based zombie system. The Chinese room doesn’t directly address explanation engines, or any capacity for learning, for that matter. There is no capacity in the thought experiment for recursion, for self-inception, or for the kind of system self-modification that occurs, for example, during our sleep. These capabilities are core to our capacity for learning, understanding, creativity, and yes, even whimsy. We will have to search beyond our Chinese room to understand the machine-based art-of-the-possible for them.

And we will surely continue to search. We know we can build arbitrarily intelligent systems, but that is not enough for us to deign to attribute true understanding to them. This conceit of our own explanation engines is summed up by the old adage that you only truly understand something if you can teach it to others—that is, only if you can thoroughly explain it. Plainly, for us, true understanding is in the explaining, not just the doing.

The Explanation Engine

Recommendation engines are now becoming quite familiar parts of our online life, although we are still in the early stages of the recommendation revolution. The ability to infer your preferences and deliver back to you valuable suggestions of content, products, and even other people is the wave of the future—another manifestation of the dominant IT theme of this decade, “adaptation.”

But what intrigues me even more than the intelligent recommendations themselves is a topic I discuss at length in The Learning Layer: the ability of the system to deliver to you an explanation of why you received a recommendation. You may have already experienced some simple examples of the “people who bought this item also bought these items” variety. But in more sophisticated systems that infer preferences and interests from a great many behavioral cues, there is an opportunity to provide much more detailed and nuanced explanations. Among other subtleties, these increasingly sophisticated explanations can provide you with not just the rationale for making the recommendation, but also any caveats or reservations the system might have, as well as a sense of the system’s degree of confidence in making the recommendation. Providing such a nuanced, human-like explanation is in many ways a more daunting technical achievement than providing the recommendation itself.

Among the many things that intrigue me about explanations for recommendations is how they can shed some light on the peculiarities of how our own brains work. This was brought home to me as we worked on designing explanatory capabilities for our recommender systems. At first we thought this would be pretty trivial—we would just have the system regurgitate the recommendation engine’s rules for recommending the particular item. But here’s the rub—the rules are not always or even typically “if-then” rules of the sort that we humans are going to easily understand. More typically, rating and ranking of potential items to recommend are going to be the product of various high-powered mathematical evaluations and manipulations of vectors and matrices. How can they possibly be compactly conveyed in an explanation to the recommendation recipient? After a while we realized that they really cannot—explanations necessarily have to be an approximation of the actual thought processes of the recommendation engine. A very useful approximation, but an approximation nevertheless. We also realized that to do them right, it was an architectural necessity to have a dedicated engine for explanations complementary to, but separate from, the powerful but inarticulate recommendation engine.
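A rough sketch may help make the architectural point, with the caveat that the vectors, items, and wording below are invented for illustration rather than drawn from the actual system: the recommendation engine ranks items with vector arithmetic, while a separate explanation engine translates that inscrutable arithmetic into an approximate, human-readable rationale with a confidence qualifier.

# A hedged sketch of the two-engine idea: scoring is done with vector math,
# and a separate explanation engine produces an approximate, plain-language
# rationale. The vectors, items, and wording are all invented for illustration.

import numpy as np

item_vectors = {
    "Intro to AI":        np.array([0.9, 0.1, 0.0]),
    "Gardening Basics":   np.array([0.0, 0.2, 0.9]),
    "Machine Learning 2": np.array([0.8, 0.3, 0.1]),
}

def recommend(user_vector):
    """The inarticulate engine: rank items by dot product with the user vector."""
    scores = {name: float(user_vector @ vec) for name, vec in item_vectors.items()}
    return max(scores, key=scores.get), scores

def explain(item, scores):
    """The explanation engine: an approximation of the math, in plain language,
    with a confidence level derived from the margin over the runner-up."""
    ranked = sorted(scores.values(), reverse=True)
    margin = ranked[0] - ranked[1] if len(ranked) > 1 else ranked[0]
    confidence = "quite confident" if margin > 0.2 else "only mildly confident"
    return (f"Recommended '{item}' because it most closely matches your recent "
            f"interests; I am {confidence} in this suggestion.")

user = np.array([0.7, 0.2, 0.1])
item, scores = recommend(user)
print(explain(item, scores))

Note that the explanation never mentions dot products or vectors; it is a deliberate simplification of what the recommendation engine actually did, which is precisely the point.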

Turns out we are no different—we humans have a language-based explanation engine that explains to others, as well as to ourselves, why we do the things we do. And it has become increasingly apparent from psychological studies over the past decade or two that our explanations are really only approximations of our underlying, unconscious decision making. In fact, brain imaging studies suggest that although we tend to believe that our conscious and logical explanation engine is making the decisions, in reality it is often just providing an after-the-fact explanation for a decision already made unconsciously elsewhere in our brains. Moreover, studies involving hypnosis suggest that our explanation engine will simply make up stories as required to provide a rationale for an action when the real motivation is not accessible to it!

Now this seems like a pretty weird state of affairs. But some lessons from building explanatory facilities for recommender systems suggest that this seemingly strange brain behavior is really the only way it can be. After all, our brain, most fundamentally, is nothing more than a vast, weighted network of neurons. Decisions are assessed and made through inscrutable interactions within this network. We humans, fairly recently in our evolutionary history, have developed a language-based explanation engine, essentially grafted onto this underlying network, that enables us to communicate effectively with other humans. To achieve a reasonable compaction, the explanations we give must typically be extreme simplifications, and quite often will also have some degree of fabrication woven in. And our explanation engine continuously explains itself to us—so that we come to believe that we are our explanation engine, that there is a clear logic to what we do, that we have an explicit freedom to decide as we wish, and a variety of other explanatory conceits.

There is really no other way we could be—it’s an inevitable architectural solution. Turns out the explanation engine has a good deal to say about us!