Hopfgartner, Frank; Kille, Benjamin; Lommatzsch, Andreas; Plumbaum, Till; Brodt, Torben; Heintz, Tobias (2014)
Publisher: Springer-Verlag
Languages: English
Types: Other
Subjects: Z665
Most user-centric studies of information access systems in the literature suffer from unrealistic settings or a limited number of participating users. To address this issue, the idea of a living lab has been promoted. Living labs allow researchers to evaluate hypotheses with a large number of users who satisfy real information needs in a real context. In this paper, we introduce a living lab on news recommendation in real time. The living lab was first organized as the News Recommendation Challenge at ACM RecSys’13 and then as the campaign-style evaluation lab NEWSREEL at CLEF’14. Within this lab, researchers were asked to provide news article recommendations to millions of users in real time. Unlike participants in laboratory user studies, these users follow their own agenda, so laboratory bias on their behavior is negligible. We outline the living lab scenario and the experimental setup of the two benchmarking events, and argue that this living lab can serve as a reference point for the implementation of living labs for the evaluation of information access systems.
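The abstract describes recommenders that must answer live requests from news portals rather than being evaluated offline. As a purely illustrative sketch (the class and method names are assumptions, not the NEWSREEL API), a minimal real-time baseline might track recent article impressions and recommend the currently most-read articles, excluding the one the user is viewing:

```python
from collections import Counter

class MostPopularRecommender:
    """Hypothetical baseline for a real-time news recommendation scenario:
    count recent article impressions and recommend the most-read ones."""

    def __init__(self):
        # Running impression counts per article id.
        self.impressions = Counter()

    def record_impression(self, article_id):
        # Called for every article view the portal reports.
        self.impressions[article_id] += 1

    def recommend(self, current_article, k=3):
        # Return the k most-viewed articles, never the one being read.
        ranked = [a for a, _ in self.impressions.most_common()
                  if a != current_article]
        return ranked[:k]
```

A real participant system would additionally respect the platform's response-time limits and update its counts from the live event stream, but the sketch shows the basic request/response shape of the task.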


Funded by: EC | CROWDREC