RARD: The Related-Article Recommendation Dataset

We are proud to announce the release of ‘RARD’, the related-article recommendation dataset from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains information about 57.4 million recommendations that were displayed to the users of Sowiport, including details on which recommendation approaches were used (e.g. content-based Read more…

Mr. DLib v1.1 released: JavaScript Client, 15 million CORE documents, new URL for recommendations-as-a-service via title search

We are proud to announce version 1.1 of Mr. DLib’s Recommender-System as-a-Service. The major new features are: a JavaScript client to request recommendations from Mr. DLib. The JavaScript client offers many advantages compared to server-side processing of our recommendations. Among others, the main page will load faster while recommendations are requested in the Read more…
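For illustration, here is a minimal sketch of how such a client-side request could look. The endpoint path and the response fields below are assumptions for illustration, not the documented Mr. DLib API contract.

```typescript
// Minimal sketch of a client-side recommendation request, similar in
// spirit to what the JavaScript client does. The endpoint path and the
// response fields are assumptions for illustration only.
interface RelatedDocument {
  title: string; // hypothetical field names
  url: string;
}

async function fetchRecommendations(documentId: string): Promise<RelatedDocument[]> {
  const response = await fetch(
    `https://api.mr-dlib.org/v1/documents/${encodeURIComponent(documentId)}/related_documents`
  );
  if (!response.ok) {
    throw new Error(`Recommendation request failed: ${response.status}`);
  }
  return response.json();
}

// Because the request runs in the browser, the main page renders
// immediately and the recommendation list is filled in once the
// promise resolves.
fetchRecommendations("example-document-id").then((docs) =>
  docs.forEach((d) => console.log(d.title, d.url))
);
```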

Paper accepted at ISI conference in Berlin: “Stereotype and Most-Popular Recommendations in the Digital Library Sowiport”

Our paper titled “Stereotype and Most-Popular Recommendations in the Digital Library Sowiport” has been accepted for publication at the 15th International Symposium on Information Science (ISI) in Berlin. Abstract: Stereotype and most-popular recommendations are widely neglected in the research-paper recommender-system and digital-library community. In other domains such as movie recommendations and hotel Read more…

Enhanced re-ranking in our recommender system based on Mendeley’s readership statistics

Content-based filtering recommendations suffer from the problem that no human quality assessments are taken into account. This means a poorly written paper p_poor would be considered just as relevant to a given input paper p_input as a high-quality paper p_quality if p_quality and p_poor contain the same words. We alleviate this problem by using Mendeley’s readership data Read more…
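As a rough illustration of the idea, the sketch below re-ranks content-based candidates by weighting their text-relevance score with a log-damped Mendeley readership count. The data shapes and the exact combination formula are assumptions for illustration, not necessarily what runs in Mr. DLib.

```typescript
// Sketch of re-ranking content-based candidates by readership.
// Field names and the combination formula are illustrative assumptions.
interface Candidate {
  id: string;
  relevance: number; // content-based similarity to the input paper
  readers: number;   // Mendeley readership count
}

function score(c: Candidate): number {
  // Log damping keeps a few very popular papers from drowning out
  // textually more relevant ones; the +1 terms keep unread papers
  // from being zeroed out entirely.
  return c.relevance * (1 + Math.log(1 + c.readers));
}

function rerankByReadership(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```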

Server status of Mr. DLib’s recommender system publicly available

Our servers for Mr. DLib’s recommender-system as-a-service (RaaS) are now monitored by UptimeRobot, a free monitoring service. You can access the status of all our RaaS servers at https://stats.uptimerobot.com/WLL5PUjN6, where you will see a status dashboard. A click on one of the server names shows more details, e.g. https://stats.uptimerobot.com/WLL5PUjN6/778037437

New recommendation algorithms integrated into Mr. DLib’s recommender system

We have integrated several new recommendation algorithms into Mr. DLib. Some of these algorithms serve only as baselines for our research; others will hopefully further increase the effectiveness of Mr. DLib. Overall, Mr. DLib now uses the following recommendation algorithms in its recommender system: Random Recommendations: this approach randomly picks Read more…
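For the random baseline, a minimal sketch could look like the following; the corpus representation is hypothetical, but the sampling logic (a partial Fisher-Yates shuffle) is the standard way to draw k distinct items uniformly.

```typescript
// Sketch of the random-recommendations baseline: draw k distinct
// documents uniformly at random from the corpus. Such a baseline gives
// a lower bound when comparing the effectiveness of other algorithms.
function randomRecommendations<T>(corpus: T[], k: number): T[] {
  const pool = [...corpus];
  const n = Math.min(k, pool.length);
  // Partial Fisher-Yates shuffle: only the first n positions are fixed up.
  for (let i = 0; i < n; i++) {
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, n);
}
```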

First Pilot Partner (GESIS’ Sowiport) Integrates Mr. DLib’s Recommendations as-a-Service

We are proud to announce that the social science portal Sowiport is using Mr. DLib’s recommender-system as-a-service as its first pilot partner. Sowiport pools and links quality information from domestic and international providers, making it available in one place. Sowiport currently contains 9.5 million references on publications and research projects. The Read more…

Docear 1.0.3 Beta: rate recommendations, new web interface, bug fixes, …

Update (February 18, 2014): No bugs were reported, so we declare Docear 1.0.3 with its recommender system stable. It can be downloaded from the normal download page.


With Docear 1.0.3 Beta we have improved PDF handling and the recommender system, added some help for new users, and enhanced the way you can access your mind maps online.

PDF Handling

We fixed several minor bugs with regard to PDF handling. In previous versions of Docear, nested PDF bookmarks were imported twice when you dragged and dropped a PDF file onto a mind map, and renaming PDF files from within Docear changed the file links in your mind maps but not in your BibTeX file. Both issues are fixed now. To rename a PDF file from within Docear, right-click it in Docear’s workspace panel on the left-hand side; note that the mind maps in which the file is linked must be open. We know this is still not ideal and will improve it in future versions of Docear.

Rate Your Recommendations

You already know about our recommender system for academic literature. If you want to help us improve it, you can now rate how well a specific set of recommendations reflects your personal field of interest. By the way, it would be nice if you did not rate a set of recommendations negatively only because it contains some recommendations you received previously. Currently, we have no mechanism to detect duplicate recommendations.

(Screenshot: rate a literature recommendation set)

(more…)

New paper: “A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation”

Yesterday, we published a pre-print on the shortcomings of current research-paper recommender-system evaluations. One of the findings was that results of offline and online experiments sometimes contradict each other. We analysed this issue in more detail and wrote a new paper about it. More specifically, we conducted a comprehensive evaluation of a set of recommendation algorithms using (a) an offline evaluation and (b) an online evaluation. Results of the two evaluation methods were compared to determine whether and when they contradicted each other. Subsequently, we discuss the differences and validity of evaluation methods, focusing on research-paper recommender systems. The goal was to identify which of the evaluation methods is most authoritative, or whether some methods are unsuitable in general. By ‘authoritative’, we mean which evaluation method one should trust when results of different methods contradict each other.

Bibliographic data: Beel, J., Langer, S., Genzmehr, M., Gipp, B. and Nürnberger, A. 2013. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys) (2013), 7–14.

Our current results cast doubt on the meaningfulness of offline evaluations. We showed that offline evaluations often could not predict the results of online experiments (measured by click-through rate, CTR), and we identified two possible reasons.
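For readers unfamiliar with the metric: CTR is simply the number of clicked recommendations divided by the number of displayed ones. The sketch below, with hypothetical data shapes, shows how one might compare the algorithm ranking produced by an offline metric with the ranking produced by online CTR; a disagreement between the two orderings is exactly the kind of contradiction we examined.

```typescript
// Hypothetical per-algorithm statistics; field names are assumptions.
interface AlgorithmStats {
  name: string;
  clicks: number;           // clicked recommendations (online)
  impressions: number;      // displayed recommendations (online)
  offlinePrecision: number; // e.g. precision@10 from the offline experiment
}

// Click-through rate: clicks over impressions.
const ctr = (s: AlgorithmStats): number => s.clicks / s.impressions;

// Rank the algorithms by each evaluation method; if the two orderings
// disagree, the offline evaluation failed to predict the online result.
function rankings(stats: AlgorithmStats[]): { online: string[]; offline: string[] } {
  const byCtr = [...stats].sort((a, b) => ctr(b) - ctr(a));
  const byPrecision = [...stats].sort((a, b) => b.offlinePrecision - a.offlinePrecision);
  return { online: byCtr.map((s) => s.name), offline: byPrecision.map((s) => s.name) };
}
```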

The first reason for the lacking predictive power of offline evaluations is that they ignore human factors. These factors may strongly influence whether users are satisfied with recommendations, regardless of the recommendations’ relevance. We argue that it will probably never be possible to determine when and how influential human factors are in practice. Thus, it is impossible to determine when offline evaluations have predictive power and when they do not. Assuming that the only purpose of offline evaluations is to predict results in real-world settings, the plausible consequence is to abandon offline evaluations entirely.

(more…)