
The Explanation Engine

Recommendation engines are now becoming quite familiar parts of our online life, although we are still in the early stages of the recommendation revolution. The ability to infer your preferences and deliver back to you valuable suggestions of content, products, and even other people is the wave of the future—another manifestation of the dominant IT theme of this decade, “adaptation.”

But what intrigues me even more than the intelligent recommendations themselves is a topic I discuss at length in The Learning Layer: the ability for the system to deliver to you an explanation of why you received a recommendation. You may have already experienced some simple examples of the “people that bought this item also bought these items” variety. But in more sophisticated systems that infer preferences and interests from a great many behavioral cues, there is an opportunity to provide much more detailed and nuanced explanations. Among other subtleties, these increasingly sophisticated explanations can provide you with not just the rationale for making the recommendation, but also any caveats or reservations the system might have, as well as a sense of the system’s degree of confidence in making the recommendation. Providing such a nuanced, human-like explanation is in many ways a more daunting technical achievement than providing the recommendation itself.
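
To make the idea of a "nuanced" explanation a bit more concrete, here is a minimal sketch of one possible shape such an explanation payload could take, carrying a rationale, any caveats, and a confidence level. All of the field names and values here are hypothetical illustrations, not the structure of any particular system.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: one way a nuanced recommendation explanation might be
# structured, with rationale, caveats, and a degree of confidence.
@dataclass
class RecommendationExplanation:
    item_id: str
    rationale: str                                      # why the item was recommended
    caveats: List[str] = field(default_factory=list)    # reservations the system has
    confidence: float = 0.0                             # 0.0 to 1.0

example = RecommendationExplanation(
    item_id="book-123",
    rationale="You recently spent time on several titles about machine learning.",
    caveats=["This title is more introductory than what you usually read."],
    confidence=0.72,
)
print(f"{example.rationale} (confidence: {example.confidence:.0%})")
```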

Among the many things that intrigue me about explanations for recommendations is how they can shed some light on the peculiarities of how our own brains work. This was brought home to me as we worked on designing explanatory capabilities for our recommender systems. At first we thought this would be pretty trivial—we would just have the system regurgitate the recommendation engine’s rules for recommending the particular item. But here’s the rub—the rules are not always or even typically “if-then” rules of the sort that we humans are going to easily understand. More typically, rating and ranking of potential items to recommend are going to be the product of various high-powered mathematical evaluations and manipulations of vectors and matrices. How can they possibly be compactly conveyed in an explanation to the recommendation recipient? After a while we realized that they really cannot—explanations necessarily have to be an approximation of the actual thought processes of the recommendation engine. A very useful approximation, but an approximation nevertheless. We also realized that to do them right, it was an architectural necessity to have a dedicated engine for explanations complementary to, but separate from, the powerful but inarticulate recommendation engine.
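
The sketch below illustrates that architectural split under simple assumptions; it is not the actual engine described here, just a generic latent-factor recommender paired with a post-hoc explanation step. The recommendation engine ranks items with opaque vector math, while a separate explanation function approximates that math with a human-readable story (pointing to the most similar item the user already liked). Item names, factors, and the similarity heuristic are all hypothetical.

```python
import numpy as np

# A minimal sketch of the split described above: an inarticulate
# recommendation engine (matrix/vector math) plus a separate explanation
# engine that produces an after-the-fact approximation of its reasoning.

rng = np.random.default_rng(0)
n_items, n_factors = 6, 4
item_factors = rng.normal(size=(n_items, n_factors))   # assumed pre-learned latent vectors
item_names = ["A", "B", "C", "D", "E", "F"]
liked = [0, 2]                                          # items the user already liked

def recommend(user_vector, exclude):
    """Recommendation engine: rank items by an inscrutable dot product."""
    scores = item_factors @ user_vector
    ranked = [i for i in np.argsort(-scores) if i not in exclude]
    return ranked[0]

def explain(recommended, liked_items):
    """Explanation engine: approximate the real computation with a simple,
    compact story based on the most similar previously liked item."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = {i: cosine(item_factors[recommended], item_factors[i]) for i in liked_items}
    nearest = max(sims, key=sims.get)
    return (f"Recommended {item_names[recommended]} because you liked "
            f"{item_names[nearest]} (similarity {sims[nearest]:.2f}); "
            f"an approximation, not the full latent-factor computation.")

user_vector = item_factors[liked].mean(axis=0)          # crude user profile from liked items
top = recommend(user_vector, exclude=set(liked))
print(explain(top, liked))
```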

Turns out we are no different—we humans have a language-based explanation engine that explains to others, as well as ourselves, why we do the things we do. And it has become increasingly apparent from psychological studies over the past decade or two that our explanations are really only approximations of our underlying, unconscious decision making. In fact, it has been confirmed by recent brain imaging studies that although we tend to believe that our conscious and logical explanation engine is making the decisions, in reality it is just providing an after-the-fact explanation for a decision already unconsciously made elsewhere in our brains. Moreover, studies involving hypnosis have revealed that our explanation engine just makes up stories as required to provide a rationale for an action if the real motivation is not accessible to the conscious explanation engine!

Now this seems like a pretty weird state of affairs. But taking some lessons from building explanatory facilities for recommender systems suggests that this seemingly strange brain behavior is really the only way it can be. After all, our brain is most fundamentally nothing more than a vast, weighted network of neurons. Decisions are assessed and made by inscrutable interactions within this network. We humans, fairly recently in our evolutionary history, have developed a language-based explanation engine, essentially grafted onto this underlying network, that enables us to communicate effectively with other humans. To achieve a reasonable compaction, the explanations we give must typically be extreme simplifications, and quite often will also have some degree of fabrication woven in. And our explanation engine continuously explains itself to us—so that we come to believe that we are our explanation engine, that there is a clear logic to what we do, that we have an explicit freedom to decide as we wish, and a variety of other explanatory conceits.

There is really no other way we could be—it’s an inevitable architectural solution. Turns out the explanation engine has a good deal to say about us!

7 Comments
  1. This is my first exploration of this site. It's really good to find so many resources in one place. This article gives an excellent explanation of our decisions and of the possible explanations for those decisions. Very interesting article indeed. How I landed here is the first question; I can explain the process, but I cannot explain the logic behind it.

    While offering advice to my clients, many times I ask questions which are not the standard ones and get responses which pinpoint the underlying problems. Why I asked these questions I cannot explain. Maybe there is some other place where this choice happens.

    I am interested in learning about these studies which you have referred to. Is it possible to give references to the “studies involving hypnosis”?

    Thank you for a thought-provoking idea.

    September 19, 2011
    • Santi,
      Thanks for your comments. Some examples of making up stories (confabulations) for decisions made under the influence of hypnosis, as well as in other situations, can be found starting on page 22 of the following pre-print document:

      Carruthers_preprint.pdf

      Steve

      September 19, 2011
  2. More on the explanation engine and its confabulations: http://discovermagazine.com/2012/brain/22-interpreter-in-your-head-spins-stories

    August 18, 2012

Trackbacks & Pingbacks

  1. The Chinese Room Explanation « The Learning Layer
  2. Watson: Will Zombies Inherit the Earth? « The Learning Layer
  3. Our Conceit of Consciousness « The Learning Layer
  4. Just the Facts Ma’am? | The Learning Layer
