Discussing the Future of Recommender Systems at RecSys 2014


Maya and Kris from the Mendeley Data Science team have just returned from RecSys 2014, the most important conference in the recommender systems world. RecSys is remarkable in that it attracts an equal number of participants from industry and academia, many of whom are at the forefront of innovation in their fields.

The team had a chance to exchange perspectives and experiences with various researchers, scholars and practitioners.

“To me, it was encouraging to see how top companies across the world are investing in recommenders, as they are shown to enhance customer satisfaction and bring real value to both users and companies,” says Mendeley Senior Data Scientist Maya Hristakeva. “LinkedIn reported that 50% of the connections made in their social network come from their follower recommender, while Netflix says that if they can stop 1% of users from cancelling their subscription then that’s worth $500M a year, which of course justifies the fact that they are investing $150M/year in their content recommendation team, consisting of 300 people.”

But one of the advantages of such a hybrid event is that it did not shy away from addressing the broader issues, such as how to guard against creating a “filter bubble” effect, how to preserve users’ privacy, and how to optimise systems for what really matters (and how that can be effectively defined). Daniel Tunkelang (LinkedIn) and Xavier Amatriain (Netflix) moderated a panel on “Controversial Questions About Personalization”, tackling some of these topics head on. Hector Garcia-Molina from Stanford University also put forward the view that we’ll increasingly see a convergence of recommendations, search and advertising, despite noticeable scepticism from the attendees.

Kris Jack, Chief Data Scientist at Mendeley, says one of the main messages that he took away from the conference was the importance of winning a user’s trust in the early stages of using a recommender system.

“The best systems have been shown to start off by providing recommendations that users can quickly evaluate as useful, before gradually introducing more novel recommendations. So in the case of helping researchers to find relevant articles to read, it’s probably best to start by recommending well-known and important articles in their field, before recommending less well-known articles that are highly pertinent to their specific problem domain,” explains Kris. “Other important factors include reranking (the order in which recommendations are shown), the UI design that can best support interaction with the recommender system, and the ways in which we can build context-aware recommendations.”
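To make that idea concrete, here is a toy sketch of one way such a trust ramp could work: blend a familiarity score with a personalised relevance score, and shift the weight towards novelty as the user interacts more with the system. The scores and the weighting schedule below are purely illustrative assumptions on our part, not a description of how Mendeley's recommender actually works.

```python
# Toy sketch of a "trust ramp" (illustrative assumptions, not Mendeley's system):
# weight easily-recognised items highly for new users, then gradually shift
# towards more novel, personalised items as the user gains experience.

def novelty_weight(num_interactions, ramp=50):
    """Grows from 0 towards 1 as the user accepts more recommendations."""
    return min(1.0, num_interactions / ramp)

def score(article, num_interactions):
    w = novelty_weight(num_interactions)
    # familiarity: how easily a user can judge the article (e.g. how well known it is)
    # relevance: personalised match to the user's specific problem domain
    return (1 - w) * article["familiarity"] + w * article["relevance"]

articles = [
    {"title": "Well-known field classic", "familiarity": 0.9, "relevance": 0.4},
    {"title": "Niche but highly pertinent", "familiarity": 0.2, "relevance": 0.95},
]

for n in (0, 100):  # a brand-new user versus an experienced one
    ranked = sorted(articles, key=lambda a: score(a, n), reverse=True)
    print(f"{n:3d} interactions -> {[a['title'] for a in ranked]}")
```

With no interaction history the field classic ranks first; after enough interactions the weighting flips and the more novel, more pertinent article rises to the top.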

What do you think of the current recommendation features on Mendeley? Are there any particular ones that you’d like to see implemented? Would you like to join the team and work on making them even better? Let us know in the comments below, or Tweet the team directly @_krisjack, @mayahhf and @Phil_Gooch. If you’re interested in finding out more about what the Data Science Team is developing in that arena, you can also watch their Mendeley Open Day presentation here.

Mendeley at ACM Recommender Systems 2013

By Mark Levy, Senior Data Scientist at Mendeley

Last week I had the pleasure of travelling to Hong Kong to give two workshop presentations at the ACM Recommender Systems conference.  The art and science of recommender systems have come some way since the first time that “users who like X also like Y” appeared on an e-commerce site on the internet, and this year’s conference attracted several hundred delegates from both industry and academia.  Despite its close association with customer satisfaction and the commercial bottom line, as a research topic Recommender Systems occupies a tiny and somewhat recherché niche within the computer science discipline of Machine Learning, which centres on the idea that if you present a computer program with enough examples of past events, it will be able to come up with a formula to make predictions about similar events in the future.  For a recommender system these events record the interaction of a user with an item, for example Alice watched Shaun of the Dead, or Kris read Thinking Fast And Slow, and the program’s predictions consist of suggested new books that Alice or Kris might like, or of other movies similar to Shaun of the Dead, and so on.  In our products these scenarios correspond to Mendeley Suggest, currently available only if you subscribe to a Pro, Plus or Max plan, and to the Related Research feature which we recently rolled out to all users in Mendeley Desktop.
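For the curious, here is a minimal sketch of the classic item-to-item idea described above, using a handful of made-up interactions. It is not the model behind Mendeley Suggest or Related Research, just the textbook approach of scoring items by how often the same users consume both.

```python
# Minimal item-to-item sketch on made-up interaction data
# (not the model behind Mendeley Suggest).
import numpy as np

users = ["Alice", "Kris", "Maya"]
items = ["Shaun of the Dead", "Hot Fuzz", "Thinking Fast And Slow"]

# Rows are users, columns are items; 1 = the user watched/read the item.
interactions = np.array([
    [1, 1, 0],   # Alice
    [0, 1, 1],   # Kris
    [1, 0, 0],   # Maya
], dtype=float)

# Item-item cosine similarity: items co-consumed by the same users score highly.
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)

def similar_to(item, k=2):
    i = items.index(item)
    order = np.argsort(-sim[i])
    return [(items[j], round(sim[i, j], 2)) for j in order if j != i][:k]

print(similar_to("Shaun of the Dead"))
# Hot Fuzz ranks above Thinking Fast And Slow, because Alice watched both comedies.
```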

One challenge for anyone trying to build a recommender system is that it’s hard to tell whether or not your predictions are going to be accurate, at least until you start making them and can see how often your users actually accept your suggestions.  As there is a huge space of possible methods to choose from – far too many to test every possibility on unsuspecting users – ideally we’d like to be able to figure out how well each prediction formula (technically, each mathematical model) matches reality before we get to that stage.  Whether and how that might be possible was a recurring theme of this year’s conference, and the subject of my first talk in Hong Kong.
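The standard workaround is offline evaluation: hide a portion of the interactions you already have, ask the model for suggestions, and count how many of the hidden items it recovers. Here is a schematic version; the hold-out split and the recall@k metric are common illustrative choices, and `recommend` is a stand-in for whatever model is under test.

```python
# Schematic offline hold-out evaluation: hide some known interactions,
# ask the model to predict, and check how many hidden items it recovers.
import random

def evaluate_recall_at_k(recommend, user_items, k=10, holdout=0.2, seed=0):
    """Hide a fraction of each user's known items and measure recall@k."""
    rng = random.Random(seed)
    hits = total = 0
    for user, items in user_items.items():
        items = list(items)
        rng.shuffle(items)
        n_hidden = max(1, int(len(items) * holdout))
        hidden, visible = set(items[:n_hidden]), items[n_hidden:]
        recs = recommend(user, visible, k)  # the model only sees `visible`
        hits += len(hidden & set(recs))
        total += len(hidden)
    return hits / total  # fraction of held-out items recovered in the top k

# Example: a trivial most-popular baseline as the model under test.
catalog_by_popularity = ["paper_a", "paper_b", "paper_c", "paper_d"]
def popularity_baseline(user, visible, k):
    return [i for i in catalog_by_popularity if i not in visible][:k]

data = {"alice": {"paper_a", "paper_c"}, "kris": {"paper_b", "paper_d", "paper_a"}}
print(evaluate_recall_at_k(popularity_baseline, data, k=2))
```

Of course, a model that scores well on held-out data may still disappoint live users, which is exactly why the question of how far offline results can be trusted kept coming up at the conference.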

Surprisingly for a field that has now seen several years of quite intense research interest and hundreds of peer-reviewed publications, most practitioners remain highly sceptical of the results reported even in their own research.  This made it particularly interesting to hear conference presentations from large tech companies such as Google, Microsoft, LinkedIn and eBay, not to mention Chinese counterparts such as Douban, Tencent and Alibaba, which were new names to me but which also operate at colossal scale.  These organisations have both the scientific expertise to develop cutting-edge methods and the opportunity to test the results on significant numbers of real users.  You might be surprised to learn quite how much sophisticated research has gone into recommending which game to play next on your Xbox.

At Mendeley we use a great deal of wonderful open source software, and so we’re very happy that the work we did in the Data Science team for my other presentation at the conference also gave us a chance to give something back to the developer community in the form of mrec, a library written in the very popular Python programming language and intended to make it easier to do reproducible research on recommender systems, even if you’ll still need to test your new algorithm on real people to convince most of us that it actually works.