Four of our poster and demo submissions were accepted for presentation at the 13th ACM Conference on Recommender Systems (RecSys 2019) in Copenhagen this September. The accepted submissions are listed below (pre-prints will follow soon):

Darwin & Goliath: A White-Label Recommender-System As-a-Service with Automated Algorithm-Selection

Joeran Beel, Alan Griffin, Conor O’Shea

Abstract. Recommendations-as-a-Service (RaaS) make it easy for small and medium-sized enterprises (SMEs) to offer product recommendations to their customers. Current RaaS, however, suffer from a one-size-fits-all concept, i.e. they apply the same recommendation algorithm for all SMEs. We introduce Darwin & Goliath, a RaaS that features multiple recommendation frameworks (Apache Lucene, TensorFlow, …) and identifies the ideal algorithm for each SME automatically. Darwin & Goliath further offers per-instance algorithm selection and a white-label feature that allows SMEs to offer a RaaS under their own brand. Since November 2018, Darwin & Goliath has delivered more than one million recommendations.
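The core idea of per-instance algorithm selection can be illustrated with a short sketch. Everything below is a hypothetical illustration of the concept, not Darwin & Goliath's actual code or API: a selector inspects each incoming request and routes it to whichever candidate algorithm a (here deliberately trivial) rule deems most promising.

```python
# Minimal sketch of per-instance algorithm selection. All names and the
# selection rule are invented for illustration, not the service's real API.
from typing import Callable, Dict, List

def popularity_recommender(user_history: List[str], catalog: Dict[str, int]) -> List[str]:
    """Baseline: recommend globally popular items the user has not yet seen."""
    ranked = sorted(catalog, key=catalog.get, reverse=True)
    return [item for item in ranked if item not in user_history][:5]

def content_recommender(user_history: List[str], catalog: Dict[str, int]) -> List[str]:
    """Stand-in for a content-based model (e.g. backed by Lucene or TensorFlow)."""
    # Toy similarity: items sharing a first letter with previously seen items.
    candidates = [i for i in catalog if any(i[0] == h[0] for h in user_history)]
    return [item for item in candidates if item not in user_history][:5]

def select_algorithm(user_history: List[str]) -> Callable:
    """Per-instance selection: choose the most promising algorithm for this
    particular request instead of one fixed algorithm for everyone."""
    # Hypothetical rule: cold-start users get the popularity baseline.
    return popularity_recommender if len(user_history) < 3 else content_recommender

catalog = {"alpha": 120, "beta": 95, "apex": 80, "bolt": 60, "axiom": 50}
for history in ([], ["alpha", "apex", "beta"]):
    algo = select_algorithm(history)
    print(algo.__name__, "->", algo(history, catalog))
```

In a production setting, the selection rule would itself be learned (e.g. from per-SME offline evaluations) rather than hard-coded as here.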


AnnoMathTeX - A Formula Identifier Annotation Recommender System for STEM Documents

Philipp Scharpf, Ian Mackerracher, Moritz Schubotz, Joeran Beel, Corinna Breitinger, Bela Gipp

Abstract. Documents from science, technology, engineering, and mathematics (STEM) often contain a large number of mathematical formulae alongside text. Semantic search, recommender, and question answering systems require the occurring formula constants and variables (identifiers) to be disambiguated. We present the first implementation of a recommender system that enables and accelerates formula annotation by displaying the most likely candidates for formula and identifier names from four different sources (arXiv, Wikipedia, Wikidata, or the surrounding text). A first evaluation shows that in total, 78% of the formula identifier name recommendations were accepted by the user as a suitable annotation. Furthermore, document-wide annotation saved the user the annotation of ten times more identifier occurrences. Our long-term vision is to integrate the annotation recommender into the edit view of Wikipedia and the online LaTeX editor Overleaf.
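To give a flavor of the candidate-recommendation step, here is a toy sketch of how name suggestions from several sources could be merged into one ranked list. The source weights, candidate names, and scoring rule are all invented for illustration; AnnoMathTeX's actual ranking is described in the paper.

```python
# Toy multi-source candidate ranking for an identifier (e.g. "E"):
# merge candidate names from each source and rank by a weighted vote.
from collections import defaultdict

def rank_candidates(candidates_by_source, source_weights):
    """Aggregate per-source candidate lists into one ranked list."""
    scores = defaultdict(float)
    for source, names in candidates_by_source.items():
        weight = source_weights.get(source, 1.0)
        for rank, name in enumerate(names):
            scores[name] += weight / (rank + 1)  # earlier rank -> higher score
    return sorted(scores, key=scores.get, reverse=True)

candidates = {
    "arxiv":     ["energy", "elastic modulus"],
    "wikipedia": ["energy"],
    "wikidata":  ["energy", "expectation value"],
    "text":      ["energy"],
}
# Hypothetical weighting: trust the surrounding text slightly more.
print(rank_candidates(candidates, {"text": 2.0}))
```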

Data Pruning in Recommender Systems Research: Best-Practice or Malpractice?

Joeran Beel, Victor Brunnel

Abstract. Many recommender-system datasets are pruned, i.e. some data is removed that wouldn't be removed in a production recommender system. For instance, the MovieLens datasets (100k, 1m, etc.) contain only data from users who rated 20 or more movies. Similarly, some researchers prune data themselves and conduct their experiments only on subsets of the original data. We conduct a study to find out how often pruned datasets are used, and what the effect of data pruning is. We find that 48% of researchers used at least one pruned dataset for their research, and 4% pruned data themselves. MovieLens is the most-used dataset (29%) and can be considered a de-facto standard dataset. Based on MovieLens, we found that removing users with fewer than 20 ratings ignores 38% of users and 5% of ratings. Performance differs widely between user groups: users with fewer than 20 ratings have an RMSE of 1.03 on average, i.e. 23% worse than users with 20+ ratings (0.84). Ignoring these users may not always be ideal. We discuss the results and conclude that pruning should be avoided, if possible, though more discussion in the community is needed.
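The pruning step the paper studies is easy to reproduce. Below is a minimal sketch, assuming a ratings table in the usual MovieLens CSV layout with userId, movieId, and rating columns; the tiny inline DataFrame stands in for the real dataset.

```python
# Sketch of MovieLens-style pruning: drop all users with fewer than
# 20 ratings, then report how many users and ratings were removed.
import pandas as pd

def prune_min_ratings(ratings: pd.DataFrame, min_ratings: int = 20) -> pd.DataFrame:
    """Keep only rows belonging to users with at least `min_ratings` ratings."""
    counts = ratings.groupby("userId")["rating"].transform("count")
    return ratings[counts >= min_ratings]

# Toy data: user 1 has 25 ratings, user 2 only 5.
ratings = pd.DataFrame({
    "userId":  [1] * 25 + [2] * 5,
    "movieId": list(range(25)) + list(range(5)),
    "rating":  [4.0] * 30,
})
pruned = prune_min_ratings(ratings)
removed_users = ratings["userId"].nunique() - pruned["userId"].nunique()
removed_ratings = 1 - len(pruned) / len(ratings)
print(f"users removed: {removed_users}, ratings removed: {removed_ratings:.0%}")
```

Run on the full MovieLens data, the same filter yields the asymmetry reported in the abstract: a large share of users but only a small share of ratings disappear.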

BERT, ELMo, USE, and InferSent Sentence Encoders: The Panacea for Research-Paper Recommendations?

Hebatallah A. Mohamed Hassan, Giuseppe Sansonetti, Fabio Gasparetti, Alessandro Micarelli, Joeran Beel

Abstract. Content-based approaches to research-paper recommendation are important when user feedback is sparse or not available. The task of content-based matching is challenging, mainly due to the problem of determining the semantic similarity of texts. Nowadays, many sentence embedding models exist that learn deep semantic representations by being trained on huge corpora, aiming to provide transfer learning to a wide variety of natural language processing tasks. In this work, we present a comparative evaluation of five well-known pre-trained sentence encoders deployed in the pipeline of title-based research-paper recommendation. The evaluated encoders are USE, BERT, InferSent, ELMo, and SciBERT. For our study, we propose a methodology for evaluating such models in reranking BM25-based recommendations. The experimental results show that semantic information from these encoders alone does not improve recommendation performance over the traditional BM25 technique, while their integration enables the retrieval of a set of relevant papers that may not be retrieved by the BM25 ranking function.
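The reranking setup can be sketched in a few lines: retrieve candidates with BM25, then mix the BM25 score with embedding cosine similarity. In the sketch below, `embed` is a placeholder for any of the compared encoders (USE, BERT, InferSent, ELMo, SciBERT), the rank-bm25 package stands in for a BM25 implementation, and the 0.5/0.5 weighting is an illustrative choice, not the paper's.

```python
# Sketch: rerank BM25 results by combining lexical and semantic scores.
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25

titles = ["deep learning for recommender systems",
          "bm25 ranking in information retrieval",
          "sentence embeddings for semantic similarity"]
query = "semantic text similarity with embeddings"

bm25 = BM25Okapi([t.split() for t in titles])
bm25_scores = np.array(bm25.get_scores(query.split()))

def embed(text: str) -> np.ndarray:
    """Placeholder encoder: swap in a real model, e.g. a SentenceTransformer."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)  # deterministic per run
    return rng.standard_normal(16)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sem_scores = np.array([cosine(embed(query), embed(t)) for t in titles])

def norm(x: np.ndarray) -> np.ndarray:
    """Scale each signal to [0, 1] so neither dominates the mix."""
    span = x.max() - x.min()
    return (x - x.min()) / span if span else np.zeros_like(x)

combined = 0.5 * norm(bm25_scores) + 0.5 * norm(sem_scores)
for idx in np.argsort(-combined):
    print(f"{combined[idx]:.2f}  {titles[idx]}")
```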


Joeran Beel

Please visit https://isg.beel.org/people/joeran-beel/ for more details about me.
