
Posts from the ‘Adaptive’ Category

Enterprise Personalized

The rise of machine learning-based personalized discovery features over the past few years is one of the biggest stories of IT. The statistics are truly staggering. For example, even as of several years ago, over 30% of Amazon’s sales were reportedly due to their personalized recommendations—the figure is no doubt even higher now. LinkedIn has reported that fully 50% of their users’ connections, group memberships, and job applications are driven by their personalized discovery features. And 75% of what people watch via Netflix is due to personalized recommendations. In addition, of course, targeted advertisements can be considered a form of personalized recommendations, as can personalized search, both of which have largely replaced their non-personalized precursors.

So in the consumer world automatic personalization has become an indispensable feature for users and a competitive imperative for providers. What about in the enterprise? Not so much—until now, that is. I wrote The Learning Layer to lay out the path toward making adaptive, personalized discovery a core feature of enterprise IT, and we at ManyWorlds are excited that our Synxi-brand technology is now making that vision a reality!

We are delivering adaptive discovery apps for the major social platforms that continuously learn from users’ experiences and apply this learning to provide users with real-time, personalized recommendations of knowledge and expertise that are sensitive to the context of their current activities. Even better, we also have connectors among these apps that extend a layer of learning across platforms. That means users can receive cross-contextualized and personalized recommendations of knowledge and expertise from one platform (e.g., SharePoint) based on what they are doing in another platform. And finally, we have booster products for enterprise search that enable search results to be personalized and/or additional personalized content to be recommended based on the context of a specific search result. That gives users, for the very first time, an enterprise search experience that tops what internet search providers can deliver.

These learning layer technologies are collectively leading toward enterprises becoming truly personalized. And an enterprise personalized is an enterprise that is more productive, as well as being an enterprise that is more compelling to be a part of and to work with.

Just the Facts Ma’am?

Now that Siri has a bona fide competitor, Google Voice Search, a bit of a kerfuffle has emerged with regard to the personalities, or lack thereof, of these assistants. While Siri strives to project some personality by being conversational and peppering her responses with a bit of whimsy, Google Voice Search is all about just giving us the facts. Each approach has advantages and its vocal adherents. And as the systems’ capabilities leap-frog one another with each new version, the latest incarnation of Google Voice Search seems to have gained some speed and effectiveness advantages versus the current incarnation of Siri. Of course, both of these incarnations promise to be fleeting given the pace of the respective development cycles.

Although Google labels their product “search,” the functionality has clearly already morphed more generally into a recommender—i.e., providing suggestions given a context of various kinds. This trend is a reflection of a generalization noted in The Learning Layer—plain old search is really best considered just a recommendation in which the context is of a particular type, i.e., a search term provided by the user. The inevitable next step in general-purpose recommender technology is delivering “meta-recommendations”—that is, explanations as to why the recommendation was provided, particularly when an explanation is specifically asked for by the recommendation recipient. A limited degree of explanatory capability has already been incorporated into the Apple and Google gals.
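The idea of a meta-recommendation can be sketched in a few lines of Python: a suggestion bundled with an explanation of why it was made. The item names, the relatedness map, and the wording of the reasons below are all hypothetical illustrations, not how any actual assistant is implemented.

```python
# A toy "meta-recommendation": each suggestion carries an explanation of why
# it was delivered, derived from the user's own behavior.
def recommend_with_explanation(user_history, catalog_related):
    """Suggest unseen items related to what the user engaged with,
    pairing each suggestion with a human-readable rationale."""
    suggestions = []
    for seen in user_history:
        for related in catalog_related.get(seen, []):
            if related not in user_history:
                suggestions.append(
                    (related, f"suggested because you engaged with '{seen}'")
                )
    return suggestions

# Hypothetical relatedness data for illustration.
related = {"siri-review": ["voice-search-comparison"]}
out = recommend_with_explanation(["siri-review"], related)
print(out)
```

The point is structural: the explanation is generated from the same behavioral context that produced the recommendation, so it can be surfaced only when the recipient asks for it.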

Then comes the really interesting advance—making the recommendations and even the explanations adaptive to the user. That is, learning from her experiences with us to adapt her recommendations and explanations accordingly. That is followed by one more short step in which aspects of her overall personality become adaptable to us and our particular circumstances as well. A little humor when called for, a bit of sympathy at other times; and all the while learning what works best and when, and tuning accordingly. I’ve got a feeling that at that point, which at the current pace of innovation is not far away, always just providing the facts will be perceived as somewhat stilted behavior, coming off like a cheesy 1960s movie version of AI.

So my guess is that there are times when, indeed, we are all Joe Fridays, but more often than not we’ll welcome more than just the facts.

Learning from Netflix

Netflix recently published a blog post that lays out some of their experiences with their recommender system. The post is notable in that Netflix was one of the pioneers of e-commerce recommendation engines, has one of the most famous such engines, and packs a lot of detail and good insight into the write-up.

Here are a few takeaways:

75% of what people watch via Netflix is due to recommendations.  And given how impressively recommendations drive sales for businesses such as Amazon, it is not surprising that sophisticated recommender systems are becoming the norm in e-commerce.

“Everything is a Recommendation.”  Netflix uses this phrase to underscore the point that most of its interface now personalizes to the user. This approach is an inevitable direction for user interfaces most generally since it is clearly technically feasible and delivers business results.

Optimize for accuracy and serendipity.  People are complex and have diverse tastes, moods, etc.  A good recommender will try to help recommendation recipients keep from just re-paving their own cow paths.

Diversity of behavior types trumps incremental algorithm advances.  Given already advanced algorithms, recommender improvement comes grudgingly if based on limited behavioral types (e.g., just coarse-grained ratings). Deriving inferences from a greater diversity of behaviors, as well as contextual cues, delivers greater advantages.

Explanations of recommendations are critically important.  Recommendations that are accompanied by explanations as to why the recommendation was delivered to the recipient are perceived to have greater levels of authority and credibility, and promote trust.
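The takeaway about behavioral diversity can be sketched as a weighted blend of signal types. The behavior types, the per-item signal values, and the weights below are invented purely for illustration; they do not reflect Netflix's actual model.

```python
# Hypothetical per-item signals from several behavior types, each already
# normalized to [0, 1]. Weights are invented for illustration only.
signals = {
    "title-a": {"rating": 0.8, "watch_time": 0.3, "search": 0.1},
    "title-b": {"rating": 0.6, "watch_time": 0.9, "search": 0.7},
}
weights = {"rating": 0.3, "watch_time": 0.5, "search": 0.2}

def score(item):
    """Blend diverse behavior types into a single relevance score."""
    return sum(weights[k] * v for k, v in signals[item].items())

# Ratings alone would favor title-a; the richer behavioral mix favors title-b.
ranked = sorted(signals, key=score, reverse=True)
print(ranked)
```

The design point is that a coarse-grained rating is just one signal among many; once several behavior types feed the score, the ranking can diverge sharply from what ratings alone would suggest.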

These are important takeaways for e-commerce, but they are just as applicable to enterprise adaptive discovery systems. In fact, as I discuss in The Learning Layer, because there are often more behavioral types, as well as topical structures, with which to work in the enterprise environment, adaptive systems in the enterprise have some inherent advantages versus those in purely consumer-facing environments.

I am often asked by executives about the value proposition of adaptive discovery or learning layer systems in the enterprise. A moment’s reflection on the ability to deliver the right knowledge and expertise to the right person at the right time generally suffices. But from a macro-economic standpoint, simply observing how recommendation systems have transformed e-commerce also should make the point.  On-line businesses without world-class personalized discovery capabilities simply cannot hope to compete going forward. The same lesson surely applies to systems within the enterprise.

The Era of Adaptive Education

No one seems quite happy with the current state of education. It’s too rote. Too much just teaching to the test. The system fails both the least and most capable students. Teachers are increasingly stressed. Etc. And we shouldn’t look to technology to solve all of this. I get that. However . . . two recent announcements surely signal the beginning of the end of education as we have known it.

First, there was the announcement of Knewton’s system for adapting digital textbooks and materials to the student based on continuous assessments of progress. Then there was the Apple iBook announcement. I know, I know, there are legitimate concerns about Apple’s “walled garden” approach and potential lock-in to their brand of educational process. Nevertheless, there is no going back. These two announcements, the first by a high-profile start-up in conjunction with the top textbook company, and the second by the world’s top tech company, coming within just a few months of one another, usher in a new era of education—one in which all of the education process will adapt to the specific needs of the student. In other words, the end of Zombie education processes!

Exactly how this will play out among the various competitors and complementors in this space is hard to predict, but the train has certainly left the station. The most basic adaptive approaches will be based on personalizing the instruction in response to explicit behavioral information such as test results. However, even more nuanced approaches are inevitable, with the adaptation and personalization being based on more subtle cues from the student, and/or from peers who are inferred to be in some way similar to the student. And it seems obvious that very shortly Siri-type natural language-based interactive capabilities will be integrated with these learning platforms.

This will have profound implications for learning, as well as for the teaching profession and its administration. It can be expected that the technology will lead, and the necessary adaptation of formal education policies and processes will tend to lag, but inexpensive, adaptive learning tools are destined to rapidly and dramatically reduce the barriers to high quality education (just as Khan Academy videos have already begun to do). Within five years a personal tutor will be available 24×7, and what we have known as education promises to blur into a global learning layer.

Siri: An IT Inflection Point

For many, the iPhone 4S was a bit of a disappointment. It didn’t include some of the most anticipated features, and for those, fans will have to wait a bit longer. But I view the 4S as easily the most significant IT product release since the original iPhone, the last product that served to fundamentally reshape IT and the IT industry.

I must admit up-front that I don’t even currently use Apple products (although family members do, so I have up-close experience with them). But that doesn’t matter–I’ve seen and heard enough of Siri to conclude that it’s “the big one.” I’m not saying that it is necessarily in its current incarnation a life changer for its users, but it is already shaping up to be a game changer because with the successful introduction of Siri, Apple has initiated the new competitive battlefield for the IT industry.

For the first time we have a “good enough” general-purpose and natural language interface-based AI, which means that there is no going back. As Siri and her ilk become more capable they will inevitably become a required feature on just about all computing devices and systems, and the dominant competitive differentiator among computer-based products. As with the Internet, it will be hard to imagine what life was like before the Siris. And what is really most important and intriguing is that it is a capability that can grow without limits–there is simply no functionality end-point in sight. She and her competitors will inevitably become more intelligent, more nuanced, more engaging, with ever more personality, and with a personality that co-evolves with their users—in short, symbiotic and indispensable.

People still often persist in talking in terms of Web 2.0 or even Web 3.0, but as I argue in The Learning Layer, seen most broadly, the IT era since about the turn of the 21st century is best thought of as the Era of Adaptation–the unifying theme being that our systems learn from their experience with us and adapt and personalize accordingly. Amazon and Google were the large-scale pioneers of this era with their product recommendations and search that adapted based on user behaviors (purchase histories and web page linking, respectively). Social networking, and most prominently Facebook, with its vastly expanded capacity for capturing behavioral information, followed. Then advertising that is targeted according to inferences of preferences from behavioral information became the standard. The iPhone revolutionized the delivery device for adaptive applications, enabling those capabilities to be delivered to users continuously. And now Siri paves the way for adaptive personalization and a wide variety of other AI-based capabilities to synergize and evolve without bounds.

From a competitive dynamics standpoint it is going to be very interesting to see how this plays out, as Siri-like capabilities combined with learning layer concepts could become the most powerful IT “lock-in” capability of all time. Once such a super-Siri builds up a history of shared experiences with you, the switching costs will be immense. It would be like losing your soul-mate. In fact, you will never again be just buying a machine, but rather, the soul in the machine. Or more realistically, an immortal soul in the cloud that outlives any individual machine. Far-fetched? Let’s check back every year or so and see. Or just ask Siri.

Social Networking and the Curse of Aristotle

The recent release and early rapid growth of Google+ has mostly been a direct consequence of social networking privacy concerns—with the Circles functionality being the key distinguishing feature versus Facebook.  Circles allows for a somewhat easier categorization of people with whom you would like to share (and gratefully only you see the categorizations in which you place people!).

What people rapidly find as their connections and number of Circles or Facebook Lists grow, however, is that the core issue isn’t so much privacy per se, but the ability to effectively and efficiently categorize at scale. A good perspective on this is Yoav Shoham’s recent blog on TechCrunch about the difficulties of manual categorization and his experience trying to categorize 300+ friends on Facebook. Circles is susceptible to the same problem—it just makes it easier and faster to run head-long into the inevitable categorization problem.

A root cause of the problem, as I harp on in The Learning Layer, rests with that purveyor of what-seems-to-be-common-sense-that-isn’t-quite-right, Aristotle.  Aristotle had the notion that an item must either fit in a category or not.  There was no maybe fits, sort of fits, or partly fits for Aristotle.  And Google+ (like Facebook and most other social networks) only enables you to compartmentalize people via the standard Aristotelian (i.e., “crisp”) set. A person is either fully in a circle/list/group or not—there is no capacity for partial inclusion.

But our brain more typically actually categorizes in accordance with non-Aristotelian, or “fuzzy” sets—that is, a person may be included in any given set by degree.  For example, someone may be sort of a friend and also a colleague, but not really a close friend, another person can be a soul mate, another mostly interested in a mutually shared hobby, etc. Sure, there are some social categories that are not fuzzy—either a person was your 12th grade classmate or not—but since non-fuzzy sets are just a special case of the more generalized fuzzy sets, fuzzy sets can gracefully handle all cases. So fuzzy sets have many advantages and this type of categorization naturally leads to fuzzy network-based structures, where relationships are by degree.  (The basic structure of our brain, not surprisingly, is a fuzzy network—the structure I therefore call “the architecture of learning” in The Learning Layer.)
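The contrast between crisp (Aristotelian) and fuzzy categorization can be made concrete in a few lines of Python. The circle names, people, and membership degrees below are hypothetical illustrations, not the API of Google+ or any other platform.

```python
# Crisp (Aristotelian) sets: a person is either in a circle or not.
crisp_circles = {
    "friends":    {"alice", "bob"},
    "colleagues": {"bob", "carol"},
}

# Fuzzy sets: membership is by degree, from 0.0 (not at all) to 1.0 (fully).
fuzzy_circles = {
    "friends":    {"alice": 1.0, "bob": 0.6, "carol": 0.2},
    "colleagues": {"bob": 0.9, "carol": 1.0},
}

def crisp_member(circle, person):
    """Crisp membership: strictly in or out."""
    return person in crisp_circles.get(circle, set())

def fuzzy_member(circle, person):
    """Fuzzy membership: a degree in [0, 1]; 0.0 means not a member at all."""
    return fuzzy_circles.get(circle, {}).get(person, 0.0)

print(crisp_member("friends", "bob"))   # in or out, nothing in between
print(fuzzy_member("friends", "bob"))   # "sort of a friend"
```

Note that a crisp set is just the special case where every degree is exactly 0.0 or 1.0, which is why fuzzy sets gracefully handle the 12th-grade-classmate case along with all the nuanced ones.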

But an issue with implementing the reality of our social networks as fuzzy networks in a system is that it can be hard to prescribe sharing controls for fuzzy relations ahead of time.  If we actually bothered to decide on an individual basis whether to share a specific item of content or posting, we would naturally do so on the basis of our nuanced, fuzzy relationships.  But that, of course, would take some consideration and time to do.

So the grand social networking bargain seems to be that for maximum expedience we either resign ourselves to share everything with everyone (what most people do on Facebook), or we employ coarse-grained non-fuzzy controls (e.g., Circles, Lists) that are a pain to set up, imprecise, and don’t scale.  Or there is another option—we cast Aristotle aside, establish (or let the system establish) a fuzzy categorization, and then let our system learn from us to become an intelligent sharing proxy that shares as we would if we had time to fully consider each sharing action.  That will, of course, require trusting the system’s learning, which will have to be earned.  But ultimately that approach and sharing everything with everyone are the only two alternatives that are durable and will scale.

Search = Recommendations

It is good to see that some artificial distinctions that have served to hamper progress in delivering truly intelligent computer interfaces are increasingly melting away. In particular, recommendations, when broadly defined, can beneficially serve as a unifying concept for a variety of computer capabilities. In The Learning Layer I defined a computer-generated recommendation as:

A recommendation is a suggestion generated by a system that is based at least in part on learning from usage behaviors.

In other words, a recommendation is an adaptive communication from the system to the user.
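A minimal sketch of that definition, using a toy co-occurrence approach over invented usage histories (the item names and the algorithm choice are illustrative assumptions, not any specific product's implementation):

```python
from collections import Counter
from itertools import combinations

# Hypothetical usage behaviors: each user's history of items they engaged with.
histories = [
    ["intro-guide", "api-reference", "tutorial"],
    ["intro-guide", "tutorial"],
    ["api-reference", "changelog"],
]

# Learn from the behaviors: items engaged with together are inferred related.
co_counts = Counter()
for history in histories:
    for a, b in combinations(set(history), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, n=2):
    """Suggest the items most often co-engaged with `item` in past behaviors."""
    scores = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [b for b, _ in scores.most_common(n)]

print(recommend("intro-guide"))  # ['tutorial', 'api-reference']
```

The key property matching the definition above is that the suggestion is derived entirely from observed usage behaviors, so as the histories change, the recommendations adapt.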

And I had this to say about search:

By the way, the results generated by modern Internet search engines are in practice almost always adaptive recommendations because they are influenced by behavioral information–at a minimum they use the behavioral information associated with people making links from one web page to another. This capability was the original technical breakthrough applied by Google that enabled their search engine to be so much more effective than that of their early competitors.

This feature of contemporary Internet-based search also provides a hint at the reason that the users of enterprise search whom I have talked with over the years have been so often underwhelmed by the performance of their internal searches compared with corresponding Internet versions. Historically, there has been little to no behavioral information embodied within the stored knowledge base of the enterprise, and so search inside the four walls of the business has been basically relegated to the sophisticated, but non-socially aware, pattern matching of text–similar to the way Internet search was before Google. Without social awareness, search can be a bit of a dud.
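The behavioral breakthrough referenced above, treating page-linking behavior as a ranking signal, is the idea behind PageRank. A toy power-iteration version over a hypothetical three-page link graph (the graph and parameters are invented for illustration):

```python
# A tiny hypothetical link graph: each page lists the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank: linking behavior propagates authority."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)      # a page splits its rank
            for q in outs:                   # among the pages it links to
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# 'c' receives links from both 'a' and 'b', so it ends up ranked highest.
print(max(ranks, key=ranks.get))  # c
```

This is exactly search-as-recommendation in miniature: the ranking is not derived from the text alone but from the aggregated linking behavior of people building the web.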

That’s why, as I mentioned in the previous blog, the computational engines behind the generation of search results and recommendations are inevitably converging, and we are therefore witnessing a voracious appetite of search engines for an ever larger and richer corpus of behavioral information to work with—Google’s new “plus one” rating function being the most recent example.

On this same note, I just happened across this brief, recent write-up that struggles with categorizing a start-up, exemplifying the blurring of search and recommendations, and finishing with the point that, “. . . Google is also increasingly acting as a recommender system, rather than just a web search engine.” Indeed—now on to bringing this convergence to the enterprise . . .

The Recommendation Revolution, Continued . . .

That “hidden in plain sight” IT revolution of ubiquitous, adaptive recommendations continues to accelerate. Jive Software, a leader in enterprise social software, has just acquired Proximal Labs to bolster their machine learning, recommendation engine, and personalization capabilities. Given that Jive is reportedly preparing for an IPO, this move is a major endorsement of the key theme of The Learning Layer–that adaptive recommendations will be a standard part of enterprise computing. It also is another strong signal that, as I recently wrote, we are going to rapidly witness a standard, adaptive IT architecture emerge in which a layer of automated learning overlays and integrates the social layer, as well as the underlying process, content, and application layers. And I would be remiss if I didn’t mention our own initiative for bringing the learning layer to Microsoft SharePoint and other enterprise collaborative environments, Synxi, which includes a suite of auto-learning functions such as knowledge and expertise discovery, interest graph visualizations, and recommendation explanations.

Another recent and important signal of the recommendation revolution is Google’s recently announced “+1” function, an analog to Facebook’s “like” button. Google will use this tagging/rating function in generating responses to search requests. Google was the pioneer in using behavioral information to improve search results, and this is just the latest, and most dramatic, step in increasingly relying on behavioral cues and signals to deliver personalized search results that most hit the mark.

That is why, as I discussed in the book, responses to search requests should really be considered just a certain kind of adaptive recommendation, one in which more intentionality of the recommendation recipient can be inferred by virtue of the search term or phrase, but that is otherwise processed and delivered like any other type of recommendation. And it is why search processing and recommendation engines will inevitably generalize and converge.

Stay tuned . . .

The Adaptive Stack

IT always evolves by building new system layers on top of preceding layers, while concurrently abstracting away from users of these new layers extraneous details of the underlying layers. In the resulting current IT “stack,” we predominantly interact with content and application layers, and when applicable and available, process layers. And we are well on our way toward abstracting away the networking and hardware infrastructure on which our applications run—bundling that big buzzing confusion into “the cloud.”

Much more recently we have begun to add a social layer on top of our other software layers—still a work in progress in most organizations. So far, these social-based systems can more often be considered architectural bolt-ons rather than a truly integral part of the enterprise IT stack. But that is clearly destined to change.

And coming right on the heels of the social layer is the learning layer—the intelligent and adaptive integrator of the social, content, and process layers. The distinguishing characteristic of this layer is its capacity for automatic learning from the collective experiences of users and delivering the learning back to users in a variety of ways.

So this is the new IT stack that is taking shape and that summarizes the enterprise systems architecture of 2011 and beyond. And since auto-learning features promise to be an integral part of every system and device with which we interact, it is the reason that the next major era of IT is most sensibly labeled “the era of adaptation.”

As I discuss in the book, there is something qualitatively different about the combination of these last two layers of the stack—the social and learning layers—in contrast to all the layers that came before. These new layers cause the boundary between systems and people to become much more blurred—it is no longer just a command and response relationship between man and machine, but rather, a mutual learning relationship. And exactly where the learning of the system and the learning of people begins and ends is a bit fuzzy.

Perhaps then, our new stack more accurately summarizes the next generation enterprise architecture, not just the IT architecture–an enterprise architecture of a different nature than that which has come before, one in which learning and adaptation is woven throughout.

Watson: Will Zombies Inherit the Earth?

Last week we witnessed a modern-day St. Valentine’s Day massacre when yet again a computer entered a hallowed arena of human intellectual combat and trounced the best that the human race had to offer. And Watson’s Jeopardy victory was a seriously impressive feat of engineering, with many more practical business applications than the last widely publicized machine-on-man intellectual violence perpetrated by IBM, Deep Blue beating chess champion Garry Kasparov.

So give Watson its due. Still, Watson is a zombie. In The Learning Layer I describe zombie systems as those that are unable to pay attention to, and learn from, human behaviors and to adapt accordingly. And furthermore, as we all know from late night movies, zombies just do things—they don’t have the ability to articulate why they do things. Just like probably every system you have ever interacted with.

Of course, now we have the technology that could rescue Watson from the realm of the zombies by adding a capacity for social awareness, and an ability to automatically learn from this social awareness. This Watson Jr. would combine Watson Sr.’s formidable textual searching, pattern matching, and natural language processing, with social learning skills. If you work in a call center, you should already be a bit worried about Watson Sr. But Watson Jr. would be quite a formidable force in a whole lot of different job markets!

We could even go a step further and put an explanation engine into Watson Jr.  He could then explain effectively, in human terms, why, for example, he proposed one Jeopardy response versus another response. Of course, since the actual rationale would be based on all of those highly inter-related and complex pattern matching algorithms he inherited from Watson Sr., he would often find it hard to explain to us his specific reasoning. Perhaps he would humor us by just making something up that he thought we would find plausible.

Come to think of it, if we asked Ken Jennings why he guessed one response versus another response, he might have trouble articulating exactly why. Just a hunch or a feeling, perhaps. Maybe there is a little zombie in all of us . . .