Self-Inception

I recently saw the cool new movie Inception—where the term “inception” means the implanting of an idea into the brain of a target by way of hacking into the target’s dreams. And as those of you who have already seen the movie know, the plot plays with this idea in a recursive way—the dream hacking is conducted in dreams within dreams, making for a mind-bending movie experience, and, of course, sufficient ambiguity between dreaming and reality to allow for many Hollywood sequel directions . . .

Watching the movie was a particularly enthralling experience for me because two key themes flowing through The Learning Layer are dreams and recursion (and, ok, because I’m a bit of a geek, I suppose). Yeah, The Learning Layer is most fundamentally a book about next-generation organizational learning, but it’s also a book of many layers, and the undercurrents of dreams and recursion are never far away.

Why the undercurrent of dreams? Well, it has become increasingly clear that dreaming serves to rewire the network of the brain. The strengthening and weakening of connections in a network is the essence of learning, and dreaming, particularly during the rapid eye movement (REM) stage of sleep, appears to be when meaning is made of the raw information we have taken in during the day, through a process of appropriately editing the wiring in our brains. That may be why not just humans, nor even just mammals, but every organism with a brain that has been closely examined seems to need sleep. So in The Learning Layer I make the case that if we want our systems to learn effectively, they had better have the capacity for the equivalent of dreaming.
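To make that concrete: below is a minimal sketch, in Python, of what a system-level equivalent of dreaming might look like. Everything here (the class name, the update rule, the parameter values) is my own illustration rather than anything from the book. The system merely logs raw co-activations while it is “awake”; only a later consolidation pass actually rewires it, strengthening the exercised connections and letting unused ones decay.

```python
from collections import defaultdict

class DreamingNetwork:
    """Toy network that rewires itself only during a 'sleep' pass."""

    def __init__(self, learning_rate=0.1, decay=0.02):
        self.weights = defaultdict(float)  # connection strengths, keyed by node pair
        self.day_log = defaultdict(int)    # raw co-activations gathered while awake
        self.learning_rate = learning_rate
        self.decay = decay

    def observe(self, node_a, node_b):
        """Record a co-activation during waking activity; no rewiring happens yet."""
        self.day_log[frozenset((node_a, node_b))] += 1

    def sleep(self):
        """The 'dreaming' pass: strengthen edges exercised today, weaken the rest."""
        for edge in set(self.weights) | set(self.day_log):
            reinforcement = self.learning_rate * self.day_log.get(edge, 0)
            self.weights[edge] = (1 - self.decay) * self.weights[edge] + reinforcement
        self.day_log.clear()  # the raw experience is consumed, not kept

net = DreamingNetwork()
net.observe("coffee", "morning")
net.observe("coffee", "morning")
net.observe("coffee", "deadline")
net.sleep()  # meaning is made of the day's raw input
print(sorted(net.weights.items(), key=lambda kv: -kv[1]))
```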

Why the undercurrent of recursion? Because it’s what invariably lies behind new, unpredictable, emergent phenomena. Recursion is the idea of a feedback loop operating on basic units (e.g., network subsets), and it can give rise to something of a different nature than the individual units on which it operates. Not surprisingly, it’s a fundamental property of the brain, one that leads to that peculiar little phenomenon emerging from our network of neurons: our mind. And so if we want truly emergent qualities to spring forth from our systems, recursion had better be at their core.
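A classic toy illustration of that point (a standard computer-science example, not a model from the book) is an elementary cellular automaton: identical units, a trivial local rule, and a feedback loop that feeds each generation back in as the next input. Nothing about a single cell predicts the intricate global patterns the recursion produces.

```python
RULE = 110  # a famously rich elementary cellular-automaton rule

def step(cells):
    """One pass of the feedback loop: each cell updates from its local neighborhood."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # a single active unit to start
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)  # the output is fed back as input: recursion at work
```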

In other words, just like our brains, our systems need to be architected to embrace recursion, and also to dream! And what makes our mind even more powerful, and yet so wonderfully unpredictable, is the capacity for self-inception. That is, myriad feedback loops can be brought to bear in the vast network that is our brain, whereby one part of the brain can modify another. This can happen unconsciously (e.g., during our sleep) and/or consciously (i.e., self-inception). In fact, as in the movie, the boundaries between dreaming and inception can become fuzzy, and I relate in the book a simple experiment you can perform on yourself to illustrate this:

You can easily see that feedback dynamic at work whenever you awaken from a dream. If you don’t try to remember the dream, you will almost always forget it. But if you will yourself to recall more details of it, and then remember it, you can make the memory of the dream consolidate, potentially forever. When you do so, one part of your brain literally, physically affects another part of your brain, which in turn may later influence other parts of your brain, and so on, for the rest of your life. Your mind will never be quite the same because of that little whim to remember that particular dream!

It’s really quite amazing if you think about it (yep, another opportunity for self-inception!). But maybe even more amazing is that we can actually architect our systems to do the same thing. That’s the essence of the learning layer concept—we provide our systems with the ability to recursively modify themselves based on their experiences with us, which amounts to modifying the connections among the representations of us and our content. And that, of course, is the analog of dreaming. Or maybe it’s not just dreaming; maybe it could be considered self-inception. I suppose that’s just a question of whether that system self-modification is performed consciously or unconsciously . . .  🙂
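Here is one way to picture that self-modification in code: a hypothetical sketch with invented names, not the book’s actual implementation. People and content are nodes, each interaction rewrites a connection, and new person-to-person links can then be inferred from the network the system has itself modified.

```python
from collections import defaultdict

class LearningLayer:
    """Toy system whose connections are rewritten by its experiences with us."""

    def __init__(self, rate=0.2):
        self.affinity = defaultdict(float)  # (person, item) -> connection strength
        self.rate = rate

    def experience(self, person, item, signal=1.0):
        """Each interaction nudges a connection toward the observed signal."""
        key = (person, item)
        self.affinity[key] += self.rate * (signal - self.affinity[key])

    def related_people(self, person):
        """Feedback on the network itself: infer person-to-person links from shared items."""
        my_items = {i for (p, i) in self.affinity if p == person}
        scores = defaultdict(float)
        for (p, i), weight in self.affinity.items():
            if p != person and i in my_items:
                scores[p] += weight
        return sorted(scores.items(), key=lambda kv: -kv[1])

layer = LearningLayer()
layer.experience("alice", "design-doc")
layer.experience("alice", "design-doc")  # repeated attention strengthens the edge
layer.experience("bob", "design-doc")
print(layer.related_people("alice"))     # bob surfaces via the shared connection
```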

Engineering Serendipity

Serendipity, the notion of unintentional but fortuitous discoveries, has become a trendy concept of late, particularly with regard to system platforms that can help foster it for us. For example, John Hagel’s recent book, The Power of Pull, addresses this idea in some detail, and very recently Eric Schmidt, Google’s CEO, mentioned electronically calculating serendipity in an interview on Google’s technology directions.

Although just about everyone would agree that some of our systems already bring to our attention unanticipated but useful nuggets of information, as well as people of whom we would otherwise remain unaware, the topic has also generated some controversy. That is perhaps partly because of a potential over-selling of the degree to which today’s systems can truly facilitate useful serendipity, and partly because of a nagging worry that serendipity that is “engineered” may be engineered to serve the hidden purposes of the engineer.

I discuss some approaches in The Learning Layer that address both of these issues. Engineering serendipity really amounts to delivering automatically generated recommendations of items of content or people. To do this well, the inference engine that generates a recommendation for you has to strike the right balance between the extremes of delivering stuff you already know about and stuff that is so far afield it is very unlikely to have any useful serendipitous effect. And the degree to which that balance can be struck is a function of the behavioral information the inference engine has to work with and its ability to wring the best possible inferential insights from that information.
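As a rough illustration of that balance (my own toy formulation, not the book’s inference engine), imagine scoring each candidate by how strongly it relates to your existing interests, discounted by how familiar it already is. The product peaks between the two extremes:

```python
def serendipity_score(relatedness, familiarity):
    """
    relatedness: 0..1, how strongly the item connects to your existing interests
    familiarity: 0..1, how much of the item you have already encountered
    The product peaks in known-but-not-too-known territory.
    """
    novelty = 1.0 - familiarity
    return relatedness * novelty

candidates = {
    "yesterday's memo":        (0.9, 0.95),  # highly related, already known: low score
    "adjacent-field paper":    (0.6, 0.10),  # the sweet spot for serendipity
    "random distant preprint": (0.05, 0.0),  # novel but unmoored: low score
}
ranked = sorted(candidates, key=lambda k: -serendipity_score(*candidates[k]))
print(ranked)
```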

Progress on more sophisticated inferential insights is really where all the AI action is these days, and it is inevitable that these capabilities will become ever more powerful, and the resulting recommendations ever more prescient and helpful. But that then amplifies the second concern—how do we know the basis for the recommendations we receive? And even more to the point, how do we know we aren’t being manipulated? 

A solution to this issue that I discuss at length in the book is that recommendations, in any of their various forms (e.g., an advertisement is best thought of as a recommendation), should always be coupled with an explanation of why the recommendation was delivered to you. A good, detailed explanation will inevitably provide you with even more insights that can spur useful serendipity. And an informative explanation can do even more—it can create a greater degree of trust in that system-based engineer of serendipity. Call it engineering serendipity with transparency. We would want nothing less from a friend doing the recommending, so why not demand the same from our systems?
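A sketch of what that coupling might look like in practice (the names and structure here are my illustrative assumptions, not from the book): the recommendation object simply cannot be constructed without its explanation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    score: float
    explanation: str  # the transparency requirement: no bare suggestions allowed

def recommend(item, score, evidence):
    """Bundle the 'what' with the 'why' drawn from the inference trail."""
    return Recommendation(item, score, "Suggested because " + "; ".join(evidence) + ".")

rec = recommend(
    "The Power of Pull",
    0.82,
    ["you saved two posts on pull systems",
     "three colleagues you follow rated it highly"],
)
print(f"{rec.item}: {rec.explanation}")
```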

The Learning Layer as a Phenomenon

I am not one given to mysticism, but one of the key points I try to make in the book is that the learning layer is somehow more than just a computer-based system, and more than just a different way of working together. Although it is both of these things, it seems to be more than the sum of the two. It has a distinctive nature—different from what has come before—born of its capacity for social awareness and its co-evolution with us. It learns and changes as we do.

I call it a “phenomenon” rather than some other label because, although the learning layer has a unique nature, “phenomenon” connotes a fuzziness of boundary that is inherent to the learning layer. Where does it end and the non-learning layer begin? Is it just the system part, or does it comprise both the system and the people with whom it is coevolving? Yes, to both, and not because of any desire to couch it in mysticism, but because the reality is that emergent phenomena invariably defy precise dissection by our intellectual scalpels.

As with anything qualitatively different, it certainly has precursor concepts, and the boundary between it and its precursors is likewise fuzzy. John Hagel and his co-authors discuss “pull systems” in their delightful new book, The Power of Pull, and the learning layer is surely a pull system. Andrew McAfee coins the category “emergent social software platforms” in his book Enterprise 2.0, and the learning layer surely also fits within this definition. But the learning layer seems to go a bit further—and yet it is hard to pin down exactly at what point it goes beyond. Perhaps, like some other kinds of phenomena, you just know it when you see it . . .