
Posts from the ‘Personalization’ Category

Our Fuzzy Social Graphs

One of the problems we often encounter with our social networks is the lack of “fuzziness” that they provide us with respect to our relationships—that is, with standard social networks you either have a relationship with another person, or you don’t. I discussed this issue in The Learning Layer:

People clearly comprise networks, and the relationships between people are not necessarily just digital in nature. We all have some relationships that are very strong, and others that are much weaker. Some people are our soul mates, some are friends, some are colleagues, and some are just acquaintances. There are shades of gray in our social relationships, just as in the case of relationships among items of content and topics. And there are different types of relationships among people, and among people and content that should be explicitly recognized. Some of these types of relationship may, in fact, be digital—for example, someone is your classmate or is not; someone is an author of an item of content or is not. But some types of relationships, such as the degree of similarity of preferences between two people, or the degree of interest a person or a group of people have with regard to a topical area, clearly will not be digital. They will be much more nuanced than that.

The inability to manage our online relationships in a more nuanced (i.e., fuzzy) fashion leads to ever bigger headaches as the scale of our social networking connections (i.e., our “social graphs”) increases. I had some comments on the way social networks have attempted to address this problem in the blog post, Social Networking and the Curse of Aristotle. At the end of the post, I mentioned that leveraging the power of machine learning provides a way for us to share activities and information in better accordance with the specific wishes we would have if we actually had the time to fully consider whether to share a particular item with each specific person to whom we are connected.

Along these lines, a recent study confirms just how well our interactions within a social network (e.g., Facebook) can be used to infer the strength of our real-world relationships. And, in fact, under the covers, Facebook’s algorithms already use this type of information to decide what to deliver in your feeds and what not to deliver. Likewise, Synxi learning layer apps do something quite similar in recommending other users or their content to users of enterprise social platforms. So machine learning is already on the job for you—your social graph is fuzzy, whether you know it or not!
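To make the idea concrete, here is a minimal sketch of how interaction counts might be turned into a continuous, “fuzzy” tie-strength score. The signal names and weights are purely illustrative assumptions, not Facebook’s or Synxi’s actual algorithms:

```python
# A minimal sketch (hypothetical signals and weights) of scoring "fuzzy" tie
# strength from interaction counts, rather than treating a connection as
# simply present or absent.

from dataclasses import dataclass

@dataclass
class InteractionCounts:
    comments: int = 0      # comments exchanged with the other person
    messages: int = 0      # direct messages exchanged
    shared_items: int = 0  # items of content both people have engaged with

def tie_strength(counts: InteractionCounts) -> float:
    """Return a score in [0, 1); higher means a stronger inferred relationship."""
    # Illustrative weights -- a real system would learn these from data.
    raw = (2.0 * counts.messages
           + 1.5 * counts.comments
           + 1.0 * counts.shared_items)
    # Squash the unbounded raw score into [0, 1) so ties are comparable.
    return raw / (raw + 10.0)

# Frequent interaction yields a strong (but never simply binary) tie.
print(tie_strength(InteractionCounts(comments=12, messages=30, shared_items=5)))  # ~0.89
print(tie_strength(InteractionCounts(comments=1)))                                 # ~0.13
```

The point of the squashing step is simply that relationship strength ends up on a continuous scale rather than the all-or-nothing connection that social networks expose to us directly.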

Enterprise Personalized

The rise of machine learning-based personalized discovery features over the past few years is one of the biggest stories in IT. The statistics are truly staggering. For example, even as of several years ago, over 30% of Amazon’s sales were reportedly due to their personalized recommendations—the figure is no doubt even higher now. LinkedIn has reported that fully 50% of their users’ connections, group memberships, and job applications are driven by its personalized discovery features. And 75% of what people watch via Netflix is due to personalized recommendations. In addition, of course, targeted advertisements can be considered a form of personalized recommendation, as can personalized search, both of which have largely replaced their non-personalized precursors.

So in the consumer world automatic personalization has become an indispensable feature for users and a competitive imperative for providers. What about in the enterprise? Not so much—until now, that is. I wrote The Learning Layer to lay out the path toward making adaptive, personalized discovery a core feature of enterprise IT, and we at ManyWorlds are excited that our Synxi-brand technology is now making that vision a reality!

We are delivering adaptive discovery apps for the major social platforms that continuously learn from users’ experiences and apply this learning to provide real-time, personalized recommendations of knowledge and expertise that are sensitive to the context of users’ current activities. Even better, we also have connectors among these apps that extend a layer of learning across platforms, which means users can receive cross-contextualized, personalized recommendations of knowledge and expertise from one platform (e.g., SharePoint) based on what they are doing in another. And finally, we have booster products for enterprise search that enable search results to be personalized and/or additional personalized content to be recommended based on the context of a specific search result. That gives users, for the very first time, an enterprise search experience that surpasses what internet search providers can deliver.

These learning layer technologies are collectively leading toward enterprises becoming truly personalized. And an enterprise personalized is an enterprise that is more productive, as well as being an enterprise that is more compelling to be a part of and to work with.

Learning from Netflix

Netflix recently published a blog post that lays out some of their experiences with their recommender system. The post is notable in that Netflix was one of the pioneers of e-commerce recommendation engines, has one of the most famous recommendation engines, and packs a lot of detail and good insight into a short piece.

Here are a few takeaways:

75% of what people watch via Netflix is due to recommendations.  And given how impressively recommendations drive sales for businesses such as Amazon, it is not surprising that sophisticated recommender systems are becoming the norm in e-commerce.

“Everything is a Recommendation.” Netflix uses this phrase to underscore the point that most of its interface now personalizes to the user. This approach is an inevitable direction for user interfaces more generally, since it is clearly technically feasible and delivers business results.

Optimize for accuracy and serendipity. People are complex and have diverse tastes, moods, etc. A good recommender tries to help recipients keep from simply re-paving their own cow paths.

Diversity of behavior types trumps incremental algorithm advances. Given already advanced algorithms, recommender improvement comes grudgingly if it is based on a limited set of behavior types (e.g., just coarse-grained ratings). Deriving inferences from a greater diversity of behaviors, as well as from contextual cues, delivers greater advantages, as sketched after these takeaways.

Explanations of recommendations are critically important. Recommendations accompanied by an explanation of why they were delivered are perceived as more authoritative and credible, and they promote trust.
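As a rough illustration of the behavioral-diversity point, blending several behavior types into a single relevance score looks something like the following. The signal names and weights here are hypothetical, not Netflix’s:

```python
# A minimal sketch (hypothetical signal names and weights) of blending several
# behavior types into one relevance score instead of relying on a single
# signal such as coarse-grained ratings.

BEHAVIOR_WEIGHTS = {
    "rating": 0.30,      # explicit, coarse-grained signal
    "view": 0.25,        # implicit: the user opened/watched the item
    "completion": 0.25,  # implicit: how much of the item was consumed
    "share": 0.20,       # implicit: the user passed the item along
}

def relevance(signals: dict[str, float]) -> float:
    """Weighted blend of normalized behavior signals (each in [0, 1])."""
    return sum(BEHAVIOR_WEIGHTS[name] * signals.get(name, 0.0)
               for name in BEHAVIOR_WEIGHTS)

# A lukewarm rating can still yield a strong overall signal when other
# behaviors point the same way.
print(relevance({"rating": 0.6, "view": 1.0, "completion": 0.9, "share": 1.0}))  # 0.855
```

The design point is simply that each additional behavior type gives the system another independent line of evidence about the user, which tends to buy more accuracy than squeezing a bit more out of the algorithm applied to a single signal.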

These are important takeaways for e-commerce, but they are just as applicable to enterprise adaptive discovery systems. In fact, as I discuss in The Learning Layer, because there are often more behavioral types, as well as topical structures, with which to work in the enterprise environment, adaptive systems in the enterprise have some inherent advantages versus those in purely consumer-facing environments.

I am often asked by executives about the value proposition of adaptive discovery or learning layer systems in the enterprise. A moment’s reflection on the ability to deliver the right knowledge and expertise to the right person at the right time generally suffices. But from a macro-economic standpoint, simply observing how recommendation systems have transformed e-commerce should also make the point. Online businesses without world-class personalized discovery capabilities simply cannot hope to compete going forward. The same lesson surely applies to systems within the enterprise.

Does Personalization Pave the Cow Paths?

Michael Hammer, the father of business reengineering, famously used the phrase “paving the cow paths” to describe the ritualizing of inefficient business practices. Now the pervasiveness of personalization of our systems is being accused of paving our cow paths by continuously reinforcing our narrow interests at the expense of exposing us to other points of view. This latest apocalyptic image being painted is one of a world where we are all increasingly locked into our parochial, polarized perspectives as the machine feeds us only what we want to hear. Big Brother turns out to be an algorithm.

I commented on this over at Greg Linden’s blog, but wanted to expand on those thoughts a bit here. Of course, I could first point out the irony that I only became aware of Eli Pariser’s book about the perils of personalization, The Filter Bubble, through a personalized RSS feed, but I will move on to more substantive points—the main one being that it seems to me that a straw-man, highly naïve personalization capability has been constructed to serve as the primary foil of the criticism. Does such relatively crude personalization occur today, and are some of the concerns, while overblown, valid? Yes. Are these relatively unsophisticated personalization functions likely to remain the state of the art for long? Nope.

As I discuss in The Learning Layer, an advanced personalization capability includes the following features:

  1. A user-controlled tuning function that enables a user to explicitly adjust the “narrowness” of inference of the user’s interests in generating recommendations
  2. An “experimentation” capability within the recommendation algorithm to at least occasionally take the user outside her typical inferred areas of interest
  3. A recommendation explanation function that provides the user with the rationale for the recommendation, including the level of confidence the system has in the recommendation, and an indication when a recommendation is intentionally outside the user’s normal areas of interest

And by the way, there are actually two reasons to deliver the occasional experimental recommendation: first, yes, to subtly encourage the user to broaden her horizons, but less obviously, to also enable the recommendation algorithm to gain more information than it would otherwise have, enabling it to develop both a broader and a finer-grained perspective of the user’s “interest space.” This allows for increasingly sophisticated, nuanced, and beneficially serendipitous recommendations. As The Learning Layer puts it:

. . . the wise system will also sometimes take the user a bit off of her well-worn paths. Think of it as the system running little experiments. Only by “taking a jump” with some of these types of experimental recommendations every now and then can the system fine-tune its understanding of the user and get a feel for potential changes in tastes and preferences. The objective of every interaction of the socially aware system is to find the right balance of providing valuable learning to the user in the present, while also interacting so as to learn more about the user in order to become even more useful in the future. It takes a deft touch.

A deft touch indeed, but also completely doable and an inevitable feature of future personalization algorithms.
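For the curious, here is a minimal sketch of how those three features might fit together, using a simple epsilon-greedy style of experimentation. The item names, topic profiles, and parameters are invented for illustration; this is not the book’s or any vendor’s implementation:

```python
# A minimal sketch (illustrative names and parameters) of the three features:
# a user-controlled narrowness knob, occasional experimental recommendations,
# and an explanation that includes a confidence level.

import random

def recommend(user_profile: dict[str, float],
              candidates: dict[str, dict[str, float]],
              narrowness: float = 0.7,     # 1.0 = stick tightly to inferred interests
              explore_rate: float = 0.1):  # chance of an experimental pick
    """Return (item, explanation) for one recommendation."""
    def affinity(item_topics: dict[str, float]) -> float:
        return sum(user_profile.get(t, 0.0) * w for t, w in item_topics.items())

    if random.random() < explore_rate:
        # Experimental recommendation, flagged as such in the explanation.
        item = random.choice(list(candidates))
        return item, "Recommended to broaden your horizons (experimental pick)."

    # Blend inferred affinity with a random term; a lower narrowness setting
    # lets items outside the user's inferred interests win more often.
    scored = {name: narrowness * affinity(topics) + (1 - narrowness) * random.random()
              for name, topics in candidates.items()}
    item = max(scored, key=scored.get)
    confidence = min(scored[item], 1.0)
    return item, f"Matches your inferred interests (confidence {confidence:.0%})."

# Example usage with made-up topics and items.
profile = {"machine_learning": 0.9, "economics": 0.2}
items = {"ML paper": {"machine_learning": 1.0},
         "Econ article": {"economics": 1.0},
         "History essay": {"history": 1.0}}
print(recommend(profile, items))
```

Note that the experimental picks do double duty here, exactly as described above: they nudge the user off the well-worn path, and the responses they provoke feed back into the user profile, sharpening the system’s view of the user’s interest space.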

I’ve got to admit, my first reaction when I see yet another in a long line of hand-wringing stories about how the Internet is making us stupider, turning us into gadgets, amplifying our prejudices, etc., is to be dismissive. After all, amidst the overwhelmingly obvious advantages we gain from advances in technology (a boring “dog bites man” story), the opportunity is to sell a “man bites dog” negative counter-view. These stories invariably have two common themes: a naïve extrapolation from the current state of the technology and an assumption that people are at best just passive entities, and at worst complete fools. History has shown these to ultimately be bad assumptions, and hence, the resulting stories cannot be taken too seriously.

On the other hand, looking beyond the “Chicken Little” aspect of these stories, there are some nuggets of insight that those of us building adaptive technologies can learn from. And a lesson from this latest one is that the more advanced auto-learning and recommendation capabilities featured in The Learning Layer are imperative for avoiding a bad case of paving-the-cow-paths syndrome.