Comprehensive Comparison of Reference Managers: Mendeley vs. Zotero vs. Docear

Which one is the best reference management software? That’s a question every student and researcher should think about quite carefully, because choosing the right reference manager can save you lots of time and significantly increase the quality of your work. So, which reference manager is best? Zotero? Mendeley? Docear? …? The answer is: “It depends”, because different people have different needs. Actually, there is no such thing as the ‘best’ reference manager, only the reference manager that is best for you (even though some developers seem to believe that their tool is the only truly perfect one).

In this blog post, we compare Zotero, Mendeley, and Docear, and we hope that the comparison helps you decide which of the reference managers is best for you. Of course, there are many other reference managers. Hopefully, we can include them in the comparison some day, but for now we only have time to compare these three. We really tried to do a fair comparison, based on a list of criteria that we consider important for reference management software. Of course, the criteria are subjectively selected, as are all criteria by all reviewers, and you might not agree with all of them. However, even if you disagree with our evaluation, you might find at least some new and interesting aspects for evaluating reference management tools. You are very welcome to share your constructive criticism in the comments, as well as links to other reviews. In addition, it should be obvious that we – the developers of Docear – are somewhat biased. However, this comparison is most certainly more objective than the comparisons that Mendeley and other reference managers have published ;-).

Please note that we only compared about 50 high-level features and used a simple rating scheme in the summary table. Of course, a more comprehensive list of features and a more sophisticated rating scheme would have been nice, but this would have been too time-consuming. So, consider this review a rough guideline. If one of the mentioned features is particularly important to you, install the tools yourself, compare the features, and share your insights in the comments! Most importantly, please let us know if something we wrote is not correct. All reviewed reference tools offer lots of functions, and we may have missed some during our review.

Please note that the developers of all three tools constantly improve them and add new features. Therefore, the table might not be perfectly up-to-date. In addition, it’s difficult to rate a particular functionality with only one of three possible ratings (yes; no; partly). Therefore, we highly suggest reading the detailed review, which explains the rationale behind the ratings.

The table above provides an overview of how Zotero, Mendeley, and Docear support you in various tasks, how open and free they are, etc. Details on the features and ratings are provided in the following sections. As already mentioned, if you notice a mistake in the evaluation (e.g. if we missed a key feature), please let us know in the comments.

Overview

If you don’t want to read a lot, just jump to the summary.

We believe that a reference manager should offer more features than simple reference management. It should support you in (1) finding literature, (2) organizing and annotating literature, (3) drafting your papers, theses, books, assignments, etc., (4) managing your references (of course), and (5) writing your papers, theses, etc. Additionally, many – but not all – students and researchers might be interested in (6) socializing and collaboration, (7) note, task, and general information management, and (8) file management. Finally, we think it is important that a reference manager (9) is available for the major operating systems, (10) has an information management approach you like (tables, social tags, search, …), and (11) is open, free, and sustainable (see also What makes a bad reference manager).

(more…)

Docear4Word 1.23 Released

The new Docear4Word v1.23 is out as a beta version. Changes include a more detailed error message when there is a parsing error in your BibTeX file, and the latest v1.0.517 version of CiteProc-JS, which should finally solve all the sorting and numbering issues. We made some adjustments that… Read more…

Do a paid internship abroad at SciPlore – Summer 2014

The SciPlore team at Google HQ in Mountain View, CA

Our partnering research group SciPlore, from which Docear evolved, is offering, in cooperation with the German Academic Exchange Service (DAAD), a paid internship for a Bachelor student in the field of computer science. The prerequisite for applying is that you are studying at a German university (if you are from the US, UK, or Canada, read here). More details on the prerequisites here.

SciPlore is an international team of researchers affiliated with the University of Magdeburg in Germany and the University of California, Berkeley. As an intern, you will have the chance to spend 6-12 weeks abroad at a research institute collaborating with the SciPlore research team.
SciPlore researches novel approaches in citation and semantic text analysis for quantifying similarities between scientific articles. Similarity assessments are crucial to many Information Retrieval (IR) tasks, such as clustering of documents, recommending academic literature, or automatically detecting plagiarism.

(more…)

Elsevier (i.e. the owner of Mendeley) “asks” the users of Academia.edu (i.e. a competitor of Mendeley) to take their papers down

A week ago, Elsevier sent messages to some users of Academia.edu, a social network for researchers (Source: Chronicle). Elsevier asked these users to remove some of their papers from their profile pages at Academia.edu. Apparently, Elsevier wasn’t happy that the authors published papers that Elsevier holds the publishing rights for. It’s an interesting question whether Elsevier has the right to prohibit uploading papers to an Academia.edu profile page, because authors have the right to publish their articles on their private homepages, and authors might argue that their Academia.edu profile is their private homepage.

What is even more interesting is the fact that it’s Elsevier who did this. This is the same company that recently bought the reference manager Mendeley, which, coincidentally, also offers a social network and hence is a competitor of Academia.edu. I wonder if Elsevier will soon start sending messages to Mendeley users, too, telling them not to upload their papers to their profile pages. Or whether Elsevier will send these messages only to users of social networks such as Academia.edu and ResearchGate to strengthen its own product Mendeley. Either way, it’s not a nice move by Elsevier; it confirms the negative attitude that many researchers have towards this publisher, and it brings back doubts about Mendeley’s openness.

Some more detailed discussions on this topic can be found here:

(more…)

Docear 1.02 Beta: Serious PDF Bug Fix; added a donation button

We discovered a serious bug in Docear that relates to PDF management. In some situations, when you edited a PDF, its annotation IDs were not recognized correctly and a conflict was shown. We fixed this bug and are publishing Docear 1.02 as a beta version today. Right now, the beta version download is only available in our forum. We would appreciate it if you could test the new version. If no more serious bugs are found, we will publish it as a stable version without any further notification.

We also added a “Please Donate” note to the workspace panel. It leads you to our donation page, and you are sincerely invited to make use of that page :-). If you have already donated, if you just don’t want to donate, or if you need every pixel in the workspace, right-click the note and you will be able to hide it. In addition, we changed the welcome page that opens after you have installed Docear.

New “Please donate” note in Docear

New “Welcome” page 

(more…)

Docear 1.01 with some minor improvements and bug fixes

A few days ago we released the experimental version of Docear and wrote about it in our experimental release forum (you can subscribe to that forum if you want to be informed about new experimental releases). Today we declare Docear 1.01 stable, and from now on it’s available on our primary download page. The changes are rather minor.

Enhancements include

  • A slightly modified dialog for selecting your PDF viewer (some links were updated)
  • The labeling of the file monitoring settings is now more uniform
  • The colors for “Move …” in the “Nodes” ribbon were changed from green to blue. There’s quite a funny story behind this. One of our team members recently told me that the arrows for moving nodes pointed in the wrong direction. I told him that they were absolutely correct, and we had quite a discussion. Then we realized that this team member is red-green color blind and couldn’t recognize the green arrows properly. Well, now the arrows are blue (see screenshot) and everybody should be able to recognize them correctly 🙂

In addition, we did some bug fixes.

(more…)

Searching and filtering via 2-dimensional tags (i.e. attributes): One of Docear’s most powerful features

One of Docear’s most unique features is its “single-section” user interface, which allows a highly effective organization of your PDFs, references, and notes. When you want to look up some information, you browse through your data, and usually you should be able to find what you are looking for quite quickly. However, sometimes browsing your data is not ideal: you may want to search over the papers’ full text or metadata (title, author, …), or you may want to use (social) tags to classify your papers.

Unfortunately, there is one problem with tags: they are one-dimensional. Imagine you want to do a literature survey about recommender systems, and you have dozens of papers about this topic. Some of the papers’ authors evaluated their recommender system with user studies, some with offline experiments, and some with online experiments. The user studies were conducted with different numbers of participants, e.g. one study was conducted with 20 participants, one with 43, and one with 68. With social tags it would be difficult to represent this information. Of course, you could easily add the tag “recommender system” to each of your papers, but how about reflecting the evaluation type? Would you want to create a different tag for each evaluation type, i.e. evaluation_user-study, evaluation_offline, evaluation_online? You might do this, but with more than three options this approach would become confusing. You definitely run into a problem when you want to store the number of study participants via social tags. This simply wouldn’t be possible, unless perhaps you created tags like no_of_participants_1-10, no_of_participants_11-50, etc.

What you would want are “2-dimensional” tags, i.e. one dimension for adding e.g. the tag “evaluation_type” to a paper and one dimension for specifying which evaluation type it is (e.g. “offline evaluation”). Docear has such two-dimensional tags, i.e. attribute-value pairs, and these attributes give you much more power than social tags. Here is how it works:
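To illustrate the underlying idea before the detailed walk-through, here is a minimal sketch in Python. It is our illustration only, not Docear’s actual data model, and the file names, attribute names, and values are invented for the example:

```python
# Flat, one-dimensional tags: the category and its value are fused into one string.
flat_tags = {
    "paper_a.pdf": {"recommender_system", "evaluation_user-study", "participants_11-50"},
    "paper_b.pdf": {"recommender_system", "evaluation_offline"},
}

# Two-dimensional tags (attribute-value pairs): one dimension names the
# attribute, the other holds its value, including exact numbers.
attributes = {
    "paper_a.pdf": {"topic": "recommender system", "evaluation_type": "user study", "participants": 43},
    "paper_b.pdf": {"topic": "recommender system", "evaluation_type": "offline"},
}

# With attribute-value pairs, precise filters become trivial, e.g. all user
# studies with at least 20 participants:
user_studies = [
    pdf for pdf, attrs in attributes.items()
    if attrs.get("evaluation_type") == "user study" and attrs.get("participants", 0) >= 20
]
print(user_studies)  # ['paper_a.pdf']
```

With flat tags, the same query would require inventing and maintaining a separate tag for every possible value range; with attribute-value pairs, you simply filter on the attribute.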

(more…)

Docear’s users donate $434 in two years (i.e. ~4 Cent per user)

As you probably know, Docear is free and open source. As you might know as well, we do accept donations. Today, we would like to share some statistics with you about the donations we received. In the past two years, we received US$434 (~€340) from 33 donors. That’s not a lot, given that Docear has several thousand active users. However, it’s also no surprise, and to be honest, we ourselves hardly ever donate to other software tools, so we cannot blame anyone for not donating to Docear (even heavy users).

The average donation we received was $13.16 (the median was $10), the highest donation was $50, the smallest $1, and the standard deviation $11.04. The following chart shows the individual and cumulative donations. Sometimes we don’t receive any donations for several months; sometimes we get multiple ones within a week or so.

(more…)

On the popularity of reference managers, and their rise and fall

This weekend, I had some spare time and wondered which is the most popular reference manager (and how Docear is doing in comparison). So, I took a list of reference managers from Wikipedia and checked some statistics on Alexa, Google Trends, and Google Keyword Planner. Since I had the data anyway, I thought I’d share it with you :-). Please note that this is a quick and dirty analysis. I cannot guarantee that one or two reference managers aren’t missing (I just took the list from Wikipedia), and, of course, there are many alternatives to Alexa and Google for measuring the popularity of a reference manager.

(more…)

Docear 1.0 (stable), a new video, new manual, new homepage, new details page, …

Today, Docear 1.0 (stable) is finally available for Windows, Mac, and Linux to download. It’s been almost two years since we released the first private alpha of Docear, and we are really proud of what we have accomplished since then. Docear is better than ever. In addition to all the enhancements we made during the past years, we completely rewrote the manual with step-by-step instructions (including an overview of supported PDF viewers), changed the homepage, created a new video, and made the features & details page much more comprehensive. For those who already use Docear 1.0 RC4, there are not many changes (just a few bug fixes). For new users, we would like to explain what Docear is and what makes it so special.

Docear is a unique solution to academic literature management that helps you to organize, create, and discover academic literature. The three most distinct features of Docear are:

  1. A single-section user interface that differs significantly from the interfaces you know from Zotero, JabRef, Mendeley, EndNote, … and that allows a more comprehensive organization of your electronic literature (PDFs) and the annotations you created (i.e. highlighted text, comments, and bookmarks).
  2. A ‘literature suite concept’ that allows you to draft and write your own assignments, papers, theses, books, etc. based on the annotations you previously created.
  3. A research paper recommender system that allows you to discover new academic literature.

Aside from Docear’s unique approach, Docear offers many more features. In particular, we would like to point out that Docear is free, open source, and not evil, and that it gives you full control over your data. Docear works with standard PDF annotations, so you can use your favorite PDF viewer. Your reference data is stored directly as BibTeX (a text-based format that can be read by almost any other reference manager). Your drafts and folders are stored in Freeplane’s XML format, again a text-based format that is easy to process and understood by several other applications. And although we offer several online services such as PDF metadata retrieval, backup space, and an online viewer, we do not force you to register. You can just install Docear on your computer, without any registration, and use 99% of Docear’s functionality.
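For readers who have never seen BibTeX: an entry is just plain text, readable in any text editor. A minimal example (the bibliographic data below is invented for illustration) looks like this:

```bibtex
@article{doe2013example,
  author  = {Doe, Jane and Roe, Richard},
  title   = {An Example Article About Recommender Systems},
  journal = {Journal of Example Studies},
  year    = {2013},
}
```

Because the format is this simple and standardized, your library is never trapped inside a proprietary database.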

But let’s get back to Docear’s unique approach for literature management…

(more…)

What makes a bad reference manager?

Update 2013-11-11: For some statistical data read On the popularity of reference managers, and their rise and fall
Update 2014-01-15: For a detailed review of Docear and other tools, read Comprehensive Comparison of Reference Managers: Mendeley vs. Zotero vs. Docear

At the time of writing these lines, there are 31 reference management tools listed on Wikipedia, and there are many attempts to identify the best ones, or even the best one (e.g. here, here, here, here, here, here, here, here, … [1]). Typically, reviewers gather a list of features and analyze which reference managers offer most of these features and hence are the best ones. Unfortunately, each reviewer has their own preferences about which features are important, and so do you: Are many export formats more important than a mobile version? Is metadata extraction for PDF files more important than importing bibliographic data from academic search engines? Would a thorough manual be more important than free support? How important is a large number of citation styles? Do you need a search & replace function? Do you want to create synonyms for term lists (whatever that means)? …?

Let’s face the truth: it’s impossible to determine which of the hundred potential features you really need.

So how can you find the best reference manager? Recently we took an ironic look at the question of which reference managers are the best. Today we want to offer a more serious analysis and propose to first identify the bad reference managers, instead of looking for the very best ones. Once the bad reference managers are eliminated, it should be easier to identify the best one(s) among the few remaining.

What makes a bad – or evil – reference manager? We believe that there are three no-go ‘features’ that make a reference manager so bad (i.e. so harmful in the long run) that you should not use it, even if it possesses all the other features you might need.

1. A “lock-in feature” that prevents you from ever switching to a competitor tool 

A reference manager might offer exactly the features you need, but how about in a few years? Maybe your needs change, other reference managers simply become better than your current tool, or your boss tells you that you have to use a specific tool. In this case it is crucial that your current reference manager doesn’t lock you in and allows switching to your new favorite reference manager. Otherwise, you will have a serious problem: you might have had the perfect reference manager for the past one or two years, but then you are bound to the now not-so-perfect tool for the rest of your academic life. To be able to switch to another reference manager, your current one should offer at least one of the following three functions (ideally the first one).

  1. Your data should be stored in a standard format that other reference managers can read.
  2. Your reference manager should be able to export your data in a standard format.
  3. Your reference manager should allow direct access to your data, so that other developers can write import filters for it (a minimal sketch of this idea follows below).
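To make point 3 concrete, here is a tiny, hypothetical import filter in Python. It is deliberately simplistic (a real BibTeX parser handles far more edge cases), but it shows why plain-text, documented formats prevent lock-in: anyone can write such a filter in a few lines.

```python
import re

def parse_bibtex_fields(entry: str) -> dict:
    """Extract simple 'key = {value}' fields from a single BibTeX entry.

    A toy import filter: because the format is plain text, any developer
    can read it without the original vendor's cooperation.
    """
    return dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))

entry = """@article{doe2014example,
  author = {Doe, Jane},
  title  = {An Example Article},
  year   = {2014},
}"""

print(parse_bibtex_fields(entry))
# {'author': 'Doe, Jane', 'title': 'An Example Article', 'year': '2014'}
```

A tool that stores your library only in an undocumented binary database offers no such escape hatch; that is exactly the lock-in we warn against.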

(more…)

Photos from the TPDL 2013

The 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) is almost over. There were many interesting presentations, great weather, and awesome food :-). I took some pictures, which you can also find on Facebook, G+, or as a single-file download on Dropbox.

New paper: “A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation”

Yesterday, we published a pre-print on the shortcomings of current research-paper recommender system evaluations. One of the findings was that the results of offline and online experiments sometimes contradict each other. We analyzed this issue in more detail and wrote a new paper about it. More specifically, we conducted a comprehensive evaluation of a set of recommendation algorithms using (a) an offline evaluation and (b) an online evaluation. The results of the two evaluation methods were compared to determine whether and when they contradict each other. Subsequently, we discuss the differences and validity of the evaluation methods, focusing on research-paper recommender systems. The goal was to identify which of the evaluation methods is most authoritative, or whether some methods are unsuitable in general. By ‘authoritative’, we mean which evaluation method one should trust when the results of different methods contradict each other.

Bibliographic data: Beel, J., Langer, S., Genzmehr, M., Gipp, B. and Nürnberger, A. 2013. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys) (2013), 7–14.

Our current results cast doubt on the meaningfulness of offline evaluations. We showed that offline evaluations often could not predict the results of online experiments (measured by click-through rate, CTR), and we identified two possible reasons.

The first reason for the lacking predictive power of offline evaluations is that they ignore human factors. These factors may strongly influence whether users are satisfied with recommendations, regardless of the recommendations’ relevance. We argue that it probably will never be possible to determine when and how influential human factors are in practice. Thus, it is impossible to determine when offline evaluations have predictive power and when they do not. Assuming that the only purpose of offline evaluations is to predict results in real-world settings, the plausible consequence is to abandon offline evaluations entirely.
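To make the two kinds of measurement concrete, here is a small sketch (our illustration with invented numbers, not data from the study): an offline evaluation typically scores an algorithm against held-out data, e.g. with precision at k, while an online evaluation measures real user behavior, e.g. click-through rate.

```python
def precision_at_k(recommended: list, relevant: set, k: int) -> float:
    """Offline metric: fraction of the top-k recommendations that appear
    in the held-out set of known-relevant documents."""
    return sum(1 for doc in recommended[:k] if doc in relevant) / k

def click_through_rate(clicks: int, impressions: int) -> float:
    """Online metric: fraction of displayed recommendations users clicked."""
    return clicks / impressions

# An algorithm can look excellent offline ...
print(precision_at_k(["d1", "d2", "d3", "d4"], {"d1", "d2", "d3"}, k=4))  # 0.75
# ... yet earn very few clicks online; this is the kind of contradiction
# between the two methods that the paper examines.
print(click_through_rate(clicks=12, impressions=1000))  # 0.012
```

Human factors (e.g. trust in the recommender or how results are presented) affect the online number but are invisible to the offline one, which is one reason the two can diverge.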

(more…)

New pre-print: “Research Paper Recommender System Evaluation: A Quantitative Literature Survey”

As you might know, Docear has a recommender system for research papers, and we are putting a lot of effort into improving it. Actually, the development of the recommender system is part of my PhD research. When I began my work on the recommender system some years ago, I became quite frustrated: there were so many different approaches for recommending research papers, but I had no clue which one would be most promising for Docear. I read many, many papers (far more than 100), and although they presented many interesting ideas, the evaluations… well, most of them were poor. Consequently, I just did not know which approaches to use in Docear.

Meanwhile, we have reviewed all these papers more carefully and analyzed how exactly the authors conducted their evaluations. More precisely, we analyzed the papers with regard to the following questions.

  1. To what extent do authors perform user studies, online evaluations, and offline evaluations?
  2. How many participants do user studies have?
  3. Against which baselines are approaches compared?
  4. Do authors provide information about the algorithms’ runtime and computational complexity?
  5. Which metrics are used for algorithm evaluation, and do different metrics provide similar rankings of the algorithms?
  6. Which datasets are used for offline evaluations?
  7. Are results comparable among different evaluations based on different datasets?
  8. How consistent are online and offline evaluations? Do they provide the same, or at least similar, rankings of the evaluated approaches?
  9. Do authors provide sufficient information to re-implement their algorithms or replicate their experiments?

(more…)

Docear 1.0 RC3: Improved monitoring concept, neater GUI, and many bug fixes

Today we released RC3 (Release Candidate 3) of Docear 1.0 (not yet on the official download page, but only here in the blog). It has one major change compared to previous Docear versions: we got rid of the “Incoming” mind map. In the past, most users never really understood why there was an ‘Incoming’ mind map and a ‘Literature & Annotations’ mind map. Now, there is only the ‘Literature & Annotations’ mind map, but it has a special “incoming” node to which new PDFs are added. We hope that this concept is easier to understand. It also means that when you move a PDF from the incoming node to any other node in the mind map and later create new annotations in the PDF, the new annotations are added directly to the PDF’s node in the mind map, instead of appearing as new nodes under the incoming node. However, if you prefer the old concept, don’t worry: you can keep your old incoming mind map and use Docear as you are used to.

(more…)

Which one is the best reference management software?

Update 2013-10-14: For a more serious analysis read What makes a bad reference manager?
Update 2013-11-11: For some statistical data read On the popularity of reference managers, and their rise and fall
Update 2014-01-15: For a detailed review, read Comprehensive Comparison of Reference Managers: Mendeley vs. Zotero vs. Docear

<irony>Have you ever wondered what the best reference management software is? Well, today I found the answer on RefWorks’ website: the best reference manager is RefWorks! Look at the picture below. It might be a little bit confusing, but we did the math: RefWorks is the best and beats EndNote, EndNote Web, Reference Manager, Zotero, and Mendeley in virtually all categories.

Comparison of reference management software - RefWorks is the best reference manager

Source: RefWorks

(more…)

Who wants to develop Docear4LibreOffice or Docear4OpenOffice?

A few months ago we released Docear4Word, an add-on for Microsoft Word that allows you to insert and format citations and bibliographies very easily in MS Word. Many of our users love Docear4Word. However, not all of our users use Microsoft Word; many use OpenOffice or LibreOffice. One of them is Stephen from UberStudent, a Linux distribution for learners. Stephen, like many others, urged us to develop an add-on comparable to Docear4Word for LibreOffice or OpenOffice. Unfortunately, we don’t have the expertise to do this.

Therefore, we would like to ask for your help. Do you have experience in developing add-ons for LibreOffice and/or OpenOffice? Then please contact us. We have prepared a description of what Docear4Libre/OpenOffice should be able to do. Read it carefully and tell us how long you would need to implement it. And don’t forget to tell us how much money you would want for it. That’s right: we are not expecting you to do it for free; we would be willing to pay something for it. Once we have found an appropriate developer, we will ask our users to donate for Docear4Libre/OpenOffice, and we will give a good amount ourselves. Stephen will also ask the UberStudent users to donate.

(more…)

Docear 1.0 (RC2) available with many bug fixes and better support for MacOS PDF viewers

There is a new version of Docear available for download. It’s basically the (experimental) RC1 version done right. RC2 fixes a lot of bugs that were caused by the new workspace model with multiple projects, features a refined and polished version of the ribbon, fixes many bugs in general, and supports the standard PDF viewers of Mac OS X (Preview and Skim), and probably a lot of other viewers as well!

If you are still using Beta 9 of Docear, a lot of things will change and improve with this new version. However, converting your old mind maps to the new format is a one-way process (you can’t use the converted files with Beta 9 anymore), and the process itself might take some time, depending on the size of your mind maps. Please back up your files before upgrading to Docear RC2.

Some of Docear’s new icons in the ribbon bar

(more…)