
Our Fuzzy Social Graphs

One of the problems we often encounter with our social networks is the lack of “fuzziness” that they provide us with respect to our relationships—that is, with standard social networks you either have a relationship with another person, or you don’t. I discussed this issue in The Learning Layer:

People clearly comprise networks, and the relationships between people are not necessarily just digital in nature. We all have some relationships that are very strong, and others that are much weaker. Some people are our soul mates, some are friends, some are colleagues, and some are just acquaintances. There are shades of gray in our social relationships, just as in the case of relationships among items of content and topics. And there are different types of relationships among people, and among people and content that should be explicitly recognized. Some of these types of relationship may, in fact, be digital—for example, someone is your classmate or is not; someone is an author of an item of content or is not. But some types of relationships, such as the degree of similarity of preferences between two people, or the degree of interest a person or a group of people have with regard to a topical area, clearly will not be digital. They will be much more nuanced than that.

The inability to manage our online relationships in a more nuanced (i.e., fuzzy) fashion leads to ever bigger headaches as the scale of our social networking connections (i.e., our “social graphs”) increases. I had some comments on the way social networks have attempted to address this problem in the blog post, Social Networking and the Curse of Aristotle. At the end of the post, I mentioned that leveraging the power of machine learning provides a way for us to share activities and information in better accordance with the specific wishes we would have if we actually had the time to fully consider whether to share a particular item with each specific person to whom we are connected.

Along these lines, a recent study confirms just how well our interactions within a social network (e.g., Facebook) can be used to infer the strength of our real-world relationships. And, in fact, under the covers, Facebook’s algorithms already use this type of information to decide what to deliver in your feeds and what not to deliver. Likewise, Synxi learning layer apps do something quite similar in recommending other users or their content to users of enterprise social platforms. So machine learning is already on the job for you—your social graph is fuzzy, whether you know it or not!
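To make the idea concrete, here is a minimal sketch of inferring fuzzy tie strength from interaction logs. Everything here is illustrative: the event kinds, the weights, and the normalization are my own assumptions, not any platform's actual algorithm.

```python
# Hypothetical sketch: inferring fuzzy tie strength from interaction counts.
# The weights below are invented for illustration; richer interactions
# (messages, comments) are assumed to signal stronger ties than passive ones.
from collections import defaultdict

WEIGHTS = {"message": 3.0, "comment": 2.0, "like": 1.0, "view": 0.2}

def tie_strengths(interactions):
    """interactions: list of (friend, kind) events.
    Returns each friend's tie strength, normalized to [0, 1]."""
    raw = defaultdict(float)
    for friend, kind in interactions:
        raw[friend] += WEIGHTS.get(kind, 0.0)
    top = max(raw.values(), default=1.0)
    return {friend: score / top for friend, score in raw.items()}

events = [("ana", "message"), ("ana", "comment"), ("ben", "like"),
          ("ana", "like"), ("cal", "view")]
print(tie_strengths(events))  # ana -> 1.0; ben and cal much weaker
```

The output is exactly the kind of graded, fuzzy social graph the post describes: relationships by degree rather than in-or-out.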

Enterprise Personalized

The rise of machine learning-based personalized discovery features over the past few years is one of the biggest stories in IT. The statistics are truly staggering. For example, even as of several years ago, over 30% of Amazon’s sales were reportedly due to their personalized recommendations—the figure is no doubt even higher now. LinkedIn has reported that fully 50% of their users’ connections, group memberships, and job applications are driven by their personalized discovery features. And 75% of what people watch via Netflix is due to personalized recommendations. In addition, of course, targeted advertisements can be considered a form of personalized recommendations, as can personalized search, both of which have largely replaced their non-personalized precursors.

So in the consumer world automatic personalization has become an indispensable feature for users and a competitive imperative for providers. What about in the enterprise? Not so much—until now, that is. I wrote The Learning Layer to lay out the path toward making adaptive, personalized discovery a core feature of enterprise IT, and we at ManyWorlds are excited that our Synxi-brand technology is now making that vision a reality!

We are delivering adaptive discovery apps for the major social platforms that continuously learn from users’ experiences and apply this learning to provide users with real-time, personalized recommendations of knowledge and expertise that are sensitive to the context of their current activities. Even better, we also have connectors among these apps that extend a layer of learning across platforms. That means users can receive cross-contextualized, personalized recommendations of knowledge and expertise from one platform (e.g., SharePoint) based on what they are doing in another. And finally, we have booster products for enterprise search that enable search results to be personalized and/or additional personalized content to be recommended based on the context of a specific search result. That provides users, for the very first time, an enterprise search experience that tops what internet search providers can deliver.

These learning layer technologies are collectively leading toward enterprises becoming truly personalized. And an enterprise personalized is an enterprise that is more productive, as well as being an enterprise that is more compelling to be a part of and to work with.

Just the Facts, Ma’am?

Now that Siri has a bona fide competitor, Google Voice Search, a bit of a kerfuffle has emerged with regard to the personalities, or lack thereof, of these assistants. While Siri strives to project some personality by being conversational and peppering her responses with a bit of whimsy, Google Voice Search is all about just giving us the facts. Each approach has advantages and its vocal adherents. And as the systems’ capabilities leap-frog one another with each new version, the latest incarnation of Google Voice Search seems to have gained some speed and effectiveness advantages versus the current incarnation of Siri. Of course, both of these incarnations promise to be fleeting given the pace of the respective development cycles.

Although Google labels their product “search,” the functionality has clearly already morphed more generally into a recommender—i.e., providing suggestions given a context of various kinds. This trend is a reflection of a generalization noted in The Learning Layer—plain old search is really best considered just a recommendation in which the context is of a particular type, i.e., a search term provided by the user. The inevitable next step in general-purpose recommender technology is delivering “meta-recommendations”—that is, explanations as to why the recommendation was provided, particularly when an explanation is specifically asked for by the recommendation recipient. A limited degree of explanatory capability has already been incorporated into the Apple and Google gals.

Then comes the really interesting advance—making the recommendations and even the explanations adaptive to the user. That is, the assistant learns from her experiences with us and adapts her recommendations and explanations accordingly. That is followed by one more short step, in which aspects of her overall personality become adaptable to us and our particular circumstances as well: a little humor when called for, a bit of sympathy at other times, and all the while learning what works best and when, and tuning accordingly. I’ve got a feeling that at that point, which at the current pace of innovation is not far away, always just providing the facts will be perceived as somewhat stilted behavior—coming off like a cheesy 1960s movie version of AI.

So my guess is that there are times when, indeed, we are all Joe Fridays, but more often than not we’ll welcome more than just the facts.

Learning from Netflix

Netflix recently published a blog post that lays out some of their experiences with their recommender system. The post is notable because Netflix was one of the pioneers of e-commerce recommendation engines, operates one of the most famous of them, and packs a lot of detail and good insight into the write-up.

Here are a few takeaways:

75% of what people watch via Netflix is due to recommendations.  And given how impressively recommendations drive sales for businesses such as Amazon, it is not surprising that sophisticated recommender systems are becoming the norm in e-commerce.

“Everything is a Recommendation.”  Netflix uses this phrase to underscore the point that most of its interface now personalizes to the user. This approach is an inevitable direction for user interfaces most generally since it is clearly technically feasible and delivers business results.

Optimize for accuracy and serendipity.  People are complex and have diverse tastes, moods, etc.  A good recommender will try to keep recommendation recipients from just re-paving their own cow paths.

Diversity of behavior types trumps incremental algorithm advances.  Given already advanced algorithms, recommender improvement comes grudgingly if based on limited behavioral types (e.g., just coarse-grained ratings). Deriving inferences from a greater diversity of behaviors, as well as contextual cues, delivers greater advantages.

Explanations of recommendations are critically important.  Recommendations that are accompanied by explanations as to why the recommendation was delivered to the recipient are perceived to have greater levels of authority and credibility, and promote trust.

These are important takeaways for e-commerce, but they are just as applicable to enterprise adaptive discovery systems. In fact, as I discuss in The Learning Layer, because there are often more behavioral types, as well as topical structures, with which to work in the enterprise environment, adaptive systems in the enterprise have some inherent advantages versus those in purely consumer-facing environments.

I am often asked by executives about the value proposition of adaptive discovery or learning layer systems in the enterprise. A moment’s reflection on the ability to deliver the right knowledge and expertise to the right person at the right time generally suffices. But from a macro-economic standpoint, simply observing how recommendation systems have transformed e-commerce also should make the point.  On-line businesses without world-class personalized discovery capabilities simply cannot hope to compete going forward. The same lesson surely applies to systems within the enterprise.

The Era of Adaptive Education

No one seems quite happy with the current state of education. It’s too rote. Too much just teaching to the test. The system fails both the least and most capable students. Teachers are increasingly stressed. Etc. And we shouldn’t look to technology to solve all of this. I get that. However . . . two recent announcements surely signal the beginning of the end of education as we have known it.

First, there was the announcement of Knewton’s system for adapting digital textbooks and materials to the student based on continuous assessments of progress. Then there was the Apple iBook announcement. I know, I know, there are legitimate concerns about Apple’s “walled garden” approach and potential lock-in to their brand of educational process. Nevertheless, there is no going back. These two announcements, the first by a high-profile start-up in conjunction with the top textbook company, and the second by the world’s top tech company, coming within just a few months of one another, usher in a new era of education—one in which all of the education process will adapt to the specific needs of the student. In other words, the end of Zombie education processes!

Exactly how this will play out among the various competitors and complementors in this space is hard to predict, but the train has certainly left the station. The most basic adaptive approaches will be based on personalizing the instruction in response to explicit behavioral information such as test results. However, even more nuanced approaches are inevitable, with the adaptation and personalization being based on more subtle cues from the student, and/or from peers who are inferred to be in some way similar to the student. And it seems obvious that very shortly Siri-type natural language-based interactive capabilities will be integrated with these learning platforms.

This will have profound implications for learning, as well as for the teaching profession and its administration. It can be expected that the technology will lead, and the necessary adaptation of formal education policies and processes will tend to lag, but inexpensive, adaptive learning tools are destined to rapidly and dramatically reduce the barriers to high quality education (just as Khan Academy videos have already begun to do). Within five years a personal tutor will be available 24×7, and what we have known as education promises to blur into a global learning layer.

Siri: An IT Inflection Point

For many, the iPhone 4S was a bit of a disappointment. It didn’t include some of the most anticipated features, and for those, fans will have to wait a bit longer. But I view the 4S as easily the most significant IT product release since the original iPhone, the last product that served to fundamentally reshape IT and the IT industry.

I must admit up-front that I don’t even currently use Apple products (although family members do, so I have up-close experience with them). But that doesn’t matter–I’ve seen and heard enough of Siri to conclude that it’s “the big one.” I’m not saying that it is necessarily in its current incarnation a life changer for its users, but it is already shaping up to be a game changer because with the successful introduction of Siri, Apple has initiated the new competitive battlefield for the IT industry.

For the first time we have a “good enough” general-purpose, natural language interface-based AI, which means that there is no going back. As Siri and her ilk grow more capable they will inevitably become a required feature on just about all computing devices and systems, and will become the dominant competitive differentiator among computer-based products. As with the Internet, it will be hard to imagine what life was like before the Siris. And what is really most important and intriguing is that it is a capability that can grow without limits–there is simply no functionality end-point in sight. She and her competitors will inevitably become more intelligent, more nuanced, more engaging, with ever more personality, and with a personality that co-evolves with their users—in short, symbiotic and indispensable.

People still often persist in talking in terms of Web 2.0 or even Web 3.0, but as I argue in The Learning Layer, seen most broadly, the IT era since about the turn of the 21st century is best thought of as the Era of Adaptation–the unifying theme being that our systems learn from their experience with us and adapt and personalize accordingly. Amazon and Google were the large-scale pioneers of this era with their product recommendations and search that adapted based on user behaviors (purchase histories and web page linking, respectively). Social networking, and most prominently Facebook, with its vastly expanded capacity for capturing behavioral information, followed. Then advertising that is targeted according to inferences of preferences from behavioral information became the standard. The iPhone revolutionized the delivery device for adaptive applications, enabling those capabilities to be delivered to users continuously. And now Siri paves the way for adaptive personalization and a wide variety of other AI-based capabilities to synergize and evolve without bounds.

From a competitive dynamics standpoint it is going to be very interesting to see how this plays out, as Siri-like capabilities combined with learning layer concepts could become the most powerful IT “lock-in” capability of all time. Once such a super-Siri builds up a history of shared experiences with you, the switching costs will be immense. It would be like losing your soul-mate. In fact, you will never again be just buying a machine, but rather, the soul in the machine. Or more realistically, an immortal soul in the cloud that outlives any individual machine. Far-fetched? Let’s check back every year or so and see. Or just ask Siri.

Brain Pattern-based Recommender Systems–Coming Soon?

Now things are going to start to get real interesting! In The Learning Layer, I categorized the types of behaviors that could be used by recommender systems to infer people’s preferences and interests. The categories I described are:

  1. Navigation and Access Behaviors (e.g., click streams, searches, transactions)
  2. Collaborative Behaviors (e.g., person-to-person communications, broadcasts, contributing comments and content)
  3. Reference Behaviors (e.g., saving, tagging, or organizing information)
  4. Direct Feedback Behaviors (e.g., ratings, comments)
  5. Self-Profiling and Subscription Behaviors (e.g., personal attributes and affiliations, subscriptions to topics and people)
  6. Physical Location and Environment (e.g., location relative to physical objects and people, lighting levels, local weather conditions)

Various subsets of these categories are already being used in a variety of systems to provide intelligent, personalized recommendations, with location-awareness being perhaps the most recent behavioral information to be leveraged, and the sensing of environmental conditions and the incorporation of that information in recommendation engines representing the very leading edge.
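As a hedged sketch of how such a system might blend these behavior categories, the snippet below combines per-category signals into a single interest score for a user/topic pair. The category weights and signal values are purely illustrative assumptions, not drawn from any real recommendation engine.

```python
# Illustrative sketch: blending the behavior categories above into one
# interest score. Weights are invented for illustration only.
CATEGORY_WEIGHTS = {
    "navigation": 0.15,     # click streams, searches, transactions
    "collaborative": 0.25,  # communications, comments, contributions
    "reference": 0.20,      # saving, tagging, organizing
    "feedback": 0.25,       # explicit ratings and comments
    "subscription": 0.15,   # subscriptions to topics and people
}

def interest_score(signals):
    """signals: {category: observed value in [0, 1]} for one user/topic pair.
    Categories with no observed signal simply contribute nothing."""
    return sum(CATEGORY_WEIGHTS[c] * v for c, v in signals.items()
               if c in CATEGORY_WEIGHTS)

score = interest_score({"navigation": 0.8, "feedback": 1.0, "reference": 0.5})
print(round(score, 2))  # 0.47
```

The weighted-sum form is the simplest possible combiner; the point is only that diverse behavior types can each contribute, by degree, to one inferred interest.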

But I also described an intriguing, to say the least, seventh category–monitored attention and physiological response behaviors. This category includes the monitoring of extrinsic behaviors such as gaze, gestures, and movements, as well as more intrinsic physiological “behaviors” such as heart rate, galvanic responses, and brain wave patterns and imaging. Exotic and futuristic stuff to be sure. But widespread practice apparently may not be as far off as one might think, given this new advance in smart phone brain scanner systems.

Sure, it will take time for this type of technology to be cost effective, user-friendly, scale to the mass market, etc. But can there be any doubt that it will eventually play a role in providing information to our intelligent recommender systems?

Social Networking and the Curse of Aristotle

The recent release and early rapid growth of Google+ have mostly been a direct consequence of social networking privacy concerns—with the Circles functionality being the key distinguishing feature versus Facebook.  Circles allows for a somewhat easier categorization of the people with whom you would like to share (and mercifully, only you see the categorizations in which you place people!).

What people rapidly find as their connections and number of Circles or Facebook Lists grow, however, is that the core issue isn’t so much privacy per se, but the ability to effectively and efficiently categorize at scale. A good perspective on this is Yoav Shoham’s recent blog on TechCrunch about the difficulties of manual categorization and his experience trying to categorize 300+ friends on Facebook. Circles is susceptible to the same problem—it just makes it easier and faster to run head-long into the inevitable categorization problem.

A root cause of the problem, as I harp on in The Learning Layer, rests with that purveyor of what-seems-to-be-common-sense-that-isn’t-quite-right, Aristotle.  Aristotle had the notion that an item must either fit in a category or not.  There was no maybe fits, sort of fits, or partly fits for Aristotle.  And Google+ (like Facebook and most other social networks) only enables you to compartmentalize people via the standard Aristotelian (i.e., “crisp”) set. A person is either fully in a circle/list/group or not—there is no capacity for partial inclusion.

But our brain more typically actually categorizes in accordance with non-Aristotelian, or “fuzzy” sets—that is, a person may be included in any given set by degree.  For example, someone may be sort of a friend and also a colleague, but not really a close friend, another person can be a soul mate, another mostly interested in a mutually shared hobby, etc. Sure, there are some social categories that are not fuzzy—either a person was your 12th grade classmate or not—but since non-fuzzy sets are just a special case of the more generalized fuzzy sets, fuzzy sets can gracefully handle all cases. So fuzzy sets have many advantages and this type of categorization naturally leads to fuzzy network-based structures, where relationships are by degree.  (The basic structure of our brain, not surprisingly, is a fuzzy network—the structure I therefore call “the architecture of learning” in The Learning Layer.)
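The fuzzy categorization described above can be sketched in a few lines. The circles, people, and membership degrees here are hypothetical, chosen only to show how crisp (Aristotelian) sets fall out as the special case of degrees 0.0 and 1.0.

```python
# A minimal sketch of fuzzy (non-Aristotelian) social categorization:
# each person belongs to each circle by degree in [0, 1], not in-or-out.
# All names and degrees below are hypothetical.
circles = {
    "close_friends": {"ana": 0.9, "ben": 0.4},
    "colleagues":    {"ben": 0.8, "cal": 1.0},
    "hobby_group":   {"ana": 0.3, "cal": 0.6},
}

def membership(person, circle):
    """Degree of membership; crisp sets are the special case of 0.0 or 1.0."""
    return circles.get(circle, {}).get(person, 0.0)

print(membership("ben", "close_friends"))  # 0.4: sort of a friend
print(membership("cal", "colleagues"))     # 1.0: fully a colleague (crisp case)
```

Note that ben appears in two circles with different degrees—exactly the “sort of a friend and also a colleague” situation a crisp Circle or List cannot express.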

But an issue with implementing our social networks as the fuzzy networks they really are is that it can be hard to prescribe sharing controls for fuzzy relations ahead of time.  If we actually bothered to decide on an individual basis whether to share a specific item of content or posting, we would naturally do so on the basis of our nuanced, fuzzy relationships.  But that, of course, would take some consideration and time to do.

So the grand social networking bargain seems to be that for maximum expedience we either resign ourselves to sharing everything with everyone (what most people do on Facebook), or we employ coarse-grained non-fuzzy controls (e.g., Circles, Lists) that are a pain to set up, imprecise, and don’t scale.  Or there is another option—we cast Aristotle aside, establish (or let the system establish) a fuzzy categorization, and then let the system learn from us to become an intelligent sharing proxy that shares as we would if we had time to fully consider each sharing action.  That will, of course, require trusting the system’s learning, which will necessarily have to be earned.  But ultimately that approach and sharing everything with everyone are the only two alternatives that are durable and will scale.
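One way such an intelligent sharing proxy could work is sketched below: share an item with a contact when their fuzzy membership in the item's intended audience clears a learned threshold. The threshold, circles, and membership degrees are all illustrative assumptions, not a description of any real system.

```python
# Hedged sketch of an "intelligent sharing proxy": share when a contact's
# fuzzy membership in the item's audience crosses a (learned) threshold.
# The 0.5 default and the membership degrees are illustrative assumptions.
def should_share(item_audience, person_memberships, threshold=0.5):
    """item_audience: set of circles the item is aimed at.
    person_memberships: {circle: degree in [0, 1]} for one contact.
    Shares when the contact's best-matching degree clears the threshold."""
    relevance = max((person_memberships.get(c, 0.0) for c in item_audience),
                    default=0.0)
    return relevance >= threshold

ben = {"close_friends": 0.4, "colleagues": 0.8}
print(should_share({"colleagues"}, ben))     # True: 0.8 >= 0.5
print(should_share({"close_friends"}, ben))  # False: 0.4 < 0.5
```

In a real system the threshold (and the memberships themselves) would be continuously adjusted from feedback—that is the learning whose trust has to be earned.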

Does Personalization Pave the Cow Paths?

Michael Hammer, the father of business reengineering, famously used the phrase “paving the cow paths” to describe the ritualizing of inefficient business practices. Now the pervasiveness of personalization of our systems is being accused of paving our cow paths by continuously reinforcing our narrow interests at the expense of exposing us to other points of view. This latest apocalyptic image being painted is one of a world where we are all increasingly locked into our parochial, polarized perspectives as the machine feeds us only what we want to hear. Big Brother turns out to be an algorithm.

I commented on this over at Greg Linden’s blog, but wanted to expand on those thoughts a bit here. Of course, I could first point out the irony that I only became aware of Eli Pariser’s book about the perils of personalization, The Filter Bubble, through a personalized RSS feed, but I will move on to more substantive points—the main one being that it seems to me that a straw-man, highly naïve personalization capability has been constructed as the primary foil of the criticism. Does such relatively crude personalization occur today, and are some of the concerns, while overblown, valid? Yes. Are these relatively unsophisticated personalization functions likely to remain the state of the art for long? Nope.

As I discuss in The Learning Layer, an advanced personalization capability includes the following features:

  1. A user-controlled tuning function that enables a user to explicitly adjust the “narrowness” of inference of the user’s interests in generating recommendations
  2. An “experimentation” capability within the recommendation algorithm to at least occasionally take the user outside her typical inferred areas of interest
  3. A recommendation explanation function that provides the user with the rationale for the recommendation, including a level of confidence the system has in making the explanation, and an indication when a recommendation is provided that is intentionally outside of the normal areas of interest

And by the way, there are actually two reasons to deliver the occasional experimental recommendation: first, yes, to subtly encourage the user to broaden her horizons, but less obviously, to also enable the recommendation algorithm to gain more information than it would otherwise have, enabling it to develop both a broader and a finer-grained perspective of the user’s “interest space.” This allows for increasingly sophisticated, nuanced, and beneficially serendipitous recommendations. As The Learning Layer puts it:

. . . the wise system will also sometimes take the user a bit off of her well-worn paths. Think of it as the system running little experiments. Only by “taking a jump” with some of these types of experimental recommendations every now and then can the system fine-tune its understanding of the user and get a feel for potential changes in tastes and preferences. The objective of every interaction of the socially aware system is to find the right balance of providing valuable learning to the user in the present, while also interacting so as to learn more about the user in order to become even more useful in the future. It takes a deft touch.

A deft touch indeed, but also completely doable and an inevitable feature of future personalization algorithms.
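The experimentation feature (item 2 above) can be sketched as simple epsilon-greedy exploration, with the user-controlled exploration rate standing in for the tuning knob of item 1 and the returned reason supporting the explanations of item 3. This is an illustrative toy, not any product's actual algorithm, and the item names are invented.

```python
# Illustrative epsilon-greedy sketch of "experimental" recommendations:
# mostly exploit the top-scored item, occasionally explore a random one,
# and return a reason string to support recommendation explanations.
import random

def recommend(scored_items, exploration_rate=0.1, rng=random):
    """scored_items: {item: inferred-interest score in [0, 1]}.
    exploration_rate: user-tunable chance of going off the beaten path."""
    if rng.random() < exploration_rate:
        item = rng.choice(list(scored_items))
        return item, "experimental pick, outside your usual interests"
    item = max(scored_items, key=scored_items.get)
    return item, f"matches your inferred interests (score {scored_items[item]:.2f})"

random.seed(0)  # with this seed the greedy branch fires
item, reason = recommend({"ml_article": 0.9, "gardening": 0.2})
print(item, "-", reason)
```

The exploration branch is what lets the algorithm run the “little experiments” the quote describes, gathering information about tastes it would otherwise never observe.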

I’ve got to admit, my first reaction when I see yet another in a long line of hand-wringing stories about how the Internet is making us stupider, turning us into gadgets, amplifying our prejudices, etc., is to be dismissive. After all, amidst the overwhelmingly obvious advantages we gain from advances in technology (a boring “dog bites man” story), the opportunity is to sell a “man bites dog” negative counter-view. These stories invariably have two common themes: a naïve extrapolation from the current state of the technology, and an assumption that people are at best just passive entities, and at worst complete fools. History has shown these to be bad assumptions, and hence the resulting stories cannot be taken too seriously.

On the other hand, looking beyond the “Chicken Little” part of these stories, there are some nuggets of insight that those of us building adaptive technologies can learn from. And a lesson from this latest one is that the type of more advanced auto-learning and recommendation capabilities featured in The Learning Layer is an imperative in avoiding a bad case of paving-the-cow-paths syndrome.

Our Conceit of Consciousness

MIT recently held a symposium called “Brains, Minds, and Machines,” which took stock of the current state of cognitive science and machine learning, and debated directions for the next generation of advances. The symposium kicked off with perspectives from some of the lions of the field of cognitive science, such as Marvin Minsky and Noam Chomsky.

A perspective from Chomsky throws down the gauntlet with regard to today’s competing schools of AI development and directions:

Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

Of course, Chomsky is tacitly assuming that “meaning” and truly understanding natural language is something much more than just statistics. But is it really? That certainly seems like common sense. On the other hand, if we look closely at the brain, all we see are networks of neurons firing in statistical patterns. The meaning somehow emerges from the statistics.

I suspect what Chomsky really wants is an “explanation engine”–an explanatory facility that can convincingly explain itself to us, presenting to us many layers of richly nuanced reasoning. Patrick Winston at the same symposium said as much:

Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: “Once you have stories, you have the kind of creativity that makes the species different to any other.”

This capability has been the goal of AI from the beginning, and 50 years later, the fact that such a capability has still not been delivered has clearly not been for lack of trying. I would argue that this perceived failure is a consequence of the early AI community falling prey to the “conceit of consciousness” that we are all prone to. We humans believe that we are our language-based explanation engine, and we therefore literally tell and convince ourselves that true meaning is solely a product of the conscious, language-based reasoning capacity of our explanation engines.

But that ain’t exactly the way nature did it, as Winston implicitly acknowledges. Inarticulate inferencing and decision making capabilities evolved over the course of billions of years and work quite well, thank you. Only very recently did we humans, apparently uniquely, become endowed with a very powerful explanation engine that provides a rationale (and often a rationalization!) for the decisions already made by our unconscious intelligence—an endowment most probably for the primary purpose of delivering compact communications to others rather than for the purpose of improving our individual decision making.

So to focus first on the explanation engine is getting things exactly backward in trying to develop machine intelligence. To recapitulate evolution, we first need to build intelligent systems that generate good inferences and decisions from large amounts of data, just like we humans continuously and unconsciously do. And like it or not, we can only do so by applying those inscrutable, inarticulate, complex, messy, math-based methods. With this essential foundation in place, we can then (mostly for the sake of our own conceit of consciousness!) build useful explanatory engines on top of the highly intelligent unconsciousness.

So I agree with Chomsky and Winston that now is indeed a fruitful time to build explanation engines—not because AI directions have been misguided for the past decade or two, but rather because we have actually come such a long way with the much maligned data-driven, statistical approach to AI, and because there is a clear path to doing even more wonderful things with this approach. Unlike 50 years ago our systems are beginning to actually have something interesting to say to us; so by all means, let us help them begin to do so!
