RecSys2012 Conference – a personal summary

I attended the RecSys 2012 conference in Dublin and summarise my personal impressions in this blog post. First of all, the conference was well organised and I met many bright people. I came away with genuinely new ideas and inspiring thoughts for future work in the field of recommender systems.

My conclusions first:

Conclusions and lessons learned
The recommendation research ecosystem has become diverse and covers many different aspects. In particular, the community has shifted from a purely algorithmic point of view to a broader scope. That’s a promising move.

However, I think we still lack interdisciplinarity, in particular when it comes to topics like decision-making processes in recommender systems. Disciplines like sociology and psychology should be mined for results that help us better model and understand the users of recommender systems. We should try to “operate” more on the interfaces between different research areas, e.g., how can we combine results from social network analysis with user-generated opinions in recommender systems?

Another takeaway is the following: real-world recommenders like the ones from LinkedIn, Netflix etc. have a huge amount of context-related data and signals from user behaviour. These data are essential to drive a recommender system successfully. However, academic researchers lack such data. Because the purely algorithmic aspect contributes only a small percentage to a successful recommendation system, it is hard for academic researchers to make useful contributions to industry. How can we solve this problem? Special agreements between a company and a research unit are possible, but that is a hard way to go most of the time. I think from an application point of view it is essential to think about solutions.

Our own contribution to the conference, “Recommendation systems in the scope of opinion formation: a model”, was a talk at the decision@recsys2012 workshop.

Conference Summary
The conference was divided into three parts: 1) workshops, 2) paper sessions and tutorials, and 3) industry sessions. I attended two workshops, the full paper sessions, some of the tutorials, and part of the industry session. Because I left the conference on Thursday morning I could not attend the RecSys data challenge part.

Workshops
The workshops took place in parallel, so it was not possible to attend all of them. I gave a talk at the workshop “Human Decision Making in Recommender Systems” and attended the second part of the “RUE” workshop and the panel discussion of the “CARS” (Context-Aware Recommender Systems) workshop. The contributions were of good quality in all workshops. However, it seems that people still investigate recommender systems as isolated systems, where only the interactions between users and the system are taken into account. This can only be a zeroth approximation: people are not only influenced by the system’s recommendations but also by external sources like advertisement, peer communication etc. The talks made it obvious that researchers have definitely shifted their focus from a narrow accuracy view to a broader, user-centric approach; topics like user neutrality and bias were investigated and first results presented.

The panel discussion at the end of the “CARS” workshop was very interesting. For example, LinkedIn uses a bunch of different signals from their users to compile their recommendations. To adjust their system they run so-called controlled user experiments, where they try to figure out which system changes have which impact on their user base, and then adjust the system based on the measurement results. They clearly outlined that compiling recommendations in the wild (in a real system for real users) is much more than just some fancy matrix operations. It requires careful observation of users and the flexibility to adapt to their behaviour.
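
To make the idea of such a controlled experiment concrete, here is a minimal sketch (my own illustration with made-up numbers, not LinkedIn’s actual setup) of comparing the click-through rates of a control group and a treatment group with a two-proportion z-test:

```python
# Minimal sketch of evaluating a controlled (A/B) user experiment.
# All numbers are hypothetical; real systems log these per user.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, users_a, clicks_b, users_b):
    """Z-score and two-sided p-value for the difference in click-through
    rate between variant A (control) and variant B (system change)."""
    p_pool = (clicks_a + clicks_b) / (users_a + users_b)  # pooled CTR under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (clicks_b / users_b - clicks_a / users_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical outcome: the changed recommender (B) collects more clicks.
z, p = two_proportion_z_test(clicks_a=1200, users_a=50_000,
                             clicks_b=1320, users_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # the change is significant if p < 0.05
```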

As a researcher I would like to have more of this context data. It is very difficult to contribute to and improve recommendation systems from a contextual point of view without such real data. Here I see a general problem: researchers from academia will have more and more difficulty producing valuable results for industry, because context is so important and such data is hard to get.

Paper sessions and tutorials
The keynote given by Jure Leskovec, “How Users Evaluate Things and Each Other in Social Media“, was very inspiring. As a physicist I liked Jure’s simple-model approach (clean, controllable, and with explanatory power), and he gave a striking statement: “Recommender Systems drive the Web”.
Xavier Amatriain gave a masterpiece of a presentation, “Building Industrial-scale Real-world Recommender Systems”. Again it was very impressive how many different signals real-world recommenders have to take into account to compile useful suggestions. The credo: measure everything and use only the important signals.

The tutorial “Conducting User Experiments in Recommender Systems“, given by Bart Knijnenburg, was well presented, but I think he addressed too many things for the available time. Bart argues for more serious statistics and experimental design in user studies, and he has a point. On the other hand, I doubt that user experiments are as straightforward as he presented.

The presented papers covered a wide range of topics in the recommender research ecosystem, e.g., social recommendations, user feedback evaluation, learning to rank etc. Many use cases were presented, showing that research results do find their way into applications! I especially enjoyed the talk “Finding a needle in a haystack of reviews: cold start context based hotel recommender system” because the authors used a spin-glass approach to detect communities. Nice to see how models and ideas make their way into the recommender research community!

The paper “CLiMF: Learning to Maximize Reciprocal Rank with Collaborative Less-is-More Filtering” won the best long paper award. The best short paper, “Using Graph Partitioning Techniques for Neighbour Selection in User-Based Collaborative Filtering”, is a nice confirmation of our own research work published in 2007.
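
As a side note for readers unfamiliar with the objective: the reciprocal rank of a recommendation list is one over the position of the first relevant item (CLiMF actually optimises a smooth lower bound of it). A toy illustration with made-up data:

```python
# Toy illustration of (mean) reciprocal rank, the metric whose smoothed
# lower bound CLiMF optimises. The two users below are made up.

def reciprocal_rank(ranked_items, relevant):
    """1 / position of the first relevant item, or 0 if none is relevant."""
    for position, item in enumerate(ranked_items, start=1):
        if item in relevant:
            return 1.0 / position
    return 0.0

lists = [(["a", "b", "c"], {"b"}),   # first hit at rank 2 -> RR = 0.5
         (["x", "y", "z"], {"x"})]   # first hit at rank 1 -> RR = 1.0
mrr = sum(reciprocal_rank(r, rel) for r, rel in lists) / len(lists)
print(f"MRR = {mrr:.2f}")  # 0.75
```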

Industry session
The highlight for me was the keynote “Online Controlled Experiments: Introduction, Learnings, and Humbling Statistics” given by Ron Kohavi from Microsoft. The message: test and learn from experiments on your user base, but beware of pitfalls. He outlined how hard it is to trust one’s own figures obtained from user-base measurements. He suggests not only running A/B tests all the time but also A/A tests to ensure the experimental design is consistent.
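
The logic behind A/A tests in a nutshell: both groups get the identical system, so at a significance level of 5% only about 5% of the tests should come out “significant”; a noticeably higher rate signals a broken experimental pipeline. A hypothetical simulation of this sanity check (all parameters invented for illustration):

```python
# Sketch of an A/A test sanity check: both "variants" serve the same
# system with the same true CTR, so at alpha = 0.05 roughly 5% of the
# runs should look significant. A much higher rate means the pipeline
# (randomisation, logging, statistics) is broken.
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)
ALPHA, RUNS, USERS, TRUE_CTR = 0.05, 1000, 5_000, 0.03
false_positives = 0
for _ in range(RUNS):
    # Identical treatment for both groups: clicks drawn from the same CTR.
    clicks_a = sum(random.random() < TRUE_CTR for _ in range(USERS))
    clicks_b = sum(random.random() < TRUE_CTR for _ in range(USERS))
    p_pool = (clicks_a + clicks_b) / (2 * USERS)
    se = sqrt(p_pool * (1 - p_pool) * (2 / USERS))
    z = (clicks_a / USERS - clicks_b / USERS) / se
    if 2 * (1 - NormalDist().cdf(abs(z))) < ALPHA:
        false_positives += 1

print(f"False positive rate: {false_positives / RUNS:.3f}")  # should be ~0.05
```
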
The industry session gave useful insights into the different kinds of user signals taken into account to drive a good recommender system. But here again: for academic researchers it is hard to get relevant data, like user interaction and user behaviour from real-world recommendation applications. This makes it hard to contribute directly by proposing new methods or improvements to existing ones. Industry asks researchers questions, but answers can only be given with access to the relevant data.
