Searching and filtering via 2-dimensional tags (i.e. attributes): One of Docear’s most powerful features

One of Docear’s most distinctive features is its “single-section” user interface, which allows a highly effective organization of your PDFs, references, and notes. When you want to look up some information, you browse through your data, and usually you will find what you are looking for quite quickly. Sometimes, however, browsing your data is not ideal: you may want to search over the papers’ full text or metadata (title, author, …), or you may want to use (social) tags to classify your papers.

Unfortunately, there is one problem with tags: they are one-dimensional. Imagine you wanted to do a literature survey about recommender systems, and you had dozens of papers on this topic. Some of the papers’ authors evaluated their recommender system with user studies, some with offline experiments, and some with online experiments. The user studies were conducted with different numbers of participants, e.g. one study was conducted with 20 participants, one with 43, and one with 68. With social tags it would be difficult to represent this information. Of course, you could easily add the tag “recommender system” to each of your papers, but how would you reflect the evaluation type? Would you create a different tag for each evaluation type, i.e. evaluation_user-study, evaluation_offline, evaluation_online? You might do this, but with more than three options this approach becomes confusing. And you definitely run into a problem when you want to store the number of study participants via social tags. This simply wouldn’t be possible, unless perhaps you created tags like no_of_participants_1-10, no_of_participants_11-50, etc.

What you would want are “2-dimensional” tags, i.e. one dimension for adding e.g. the tag “evaluation_type” to a paper and one dimension for specifying which evaluation type it is (e.g. “offline evaluation”). Docear offers such two-dimensional tags, i.e. attribute-value pairs, and these attributes give you much more power than social tags. Here is how it works:
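The difference between flat tags and attribute-value pairs can be sketched in a few lines of Python (the paper data is invented for illustration and is not Docear’s actual storage format):

```python
# One-dimensional (social) tags: just a flat set of labels per paper.
flat_tags = {"recommender system", "evaluation_user-study", "no_of_participants_11-50"}

# Two-dimensional tags: attribute-value pairs per paper.
papers = [
    {"title": "Paper A", "evaluation_type": "user study", "participants": 20},
    {"title": "Paper B", "evaluation_type": "user study", "participants": 68},
    {"title": "Paper C", "evaluation_type": "offline",    "participants": None},
]

# With attributes, precise filtering becomes trivial, e.g. all user
# studies with more than 40 participants:
large_studies = [p["title"] for p in papers
                 if p["evaluation_type"] == "user study"
                 and p["participants"] is not None
                 and p["participants"] > 40]
print(large_studies)  # ['Paper B']
```

With flat tags, the same query would require inventing and maintaining bucket tags like no_of_participants_11-50; with attribute-value pairs, the number is just data you can filter on.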

(more…)

Docear’s users donate $434 in two years (i.e. ~4 cents per user)

As you probably know, Docear is free and open source. As you might know as well, we do accept donations. Today, we would like to share some statistics about the donations we have received. In the past two years, we received 434 US$ (~340 €) from 33 donors. That’s not a lot, given that Docear has several thousand active users. However, it’s also no surprise, and to be honest, we ourselves hardly ever donate to other software tools, so we cannot blame anyone for not donating to Docear (even if they use it heavily).

The average donation we received was 13.16$ (the median was 10$), the highest donation was 50$, the smallest 1$, and the standard deviation was 11.04$. The following chart shows the individual and cumulated donations. Sometimes we don’t receive any donations for several months; sometimes we get multiple ones within a week or so.
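For readers unfamiliar with the statistics above, mean, median, and (sample) standard deviation are straightforward to compute; a minimal sketch with Python’s standard library (the donation amounts below are invented, since the real 33 amounts are not public):

```python
import statistics

# Hypothetical donation amounts in US$, purely for illustration.
donations = [1, 5, 10, 10, 10, 15, 20, 25, 50]

mean = statistics.mean(donations)      # sum / count
median = statistics.median(donations)  # middle value when sorted
stdev = statistics.stdev(donations)    # sample standard deviation

print(f"mean={mean:.2f} median={median} stdev={stdev:.2f}")
```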

(more…)

On the popularity of reference managers, and their rise and fall

This weekend I had some spare time, and I wondered which is the most popular reference manager (and how Docear is doing in comparison). So I took a list of reference managers from Wikipedia and checked some statistics on Alexa, Google Trends, and Google Keyword Planner. Since I had the data anyway, I thought I’d share it with you :-). Please note that this is a quick and dirty analysis. I cannot guarantee that one or two reference managers aren’t missing (I just took the list from Wikipedia), and, of course, there are many alternatives to Alexa and Google for measuring the popularity of a reference manager.

(more…)

Docear 1.0 (stable), a new video, new manual, new homepage, new details page, …

Today, Docear 1.0 (stable) is finally available for download for Windows, Mac, and Linux. It’s been almost two years since we released the first private Alpha of Docear, and we are really proud of what we have accomplished since then. Docear is better than ever. In addition to all the enhancements of the past years, we completely rewrote the manual with step-by-step instructions (including an overview of supported PDF viewers), changed the homepage, created a new video, and made the features & details page much more comprehensive. For those who already use Docear 1.0 RC4, there are not many changes (just a few bug fixes). For new users, we would like to explain what Docear is and what makes it so special.

Docear is a unique solution to academic literature management that helps you to organize, create, and discover academic literature. The three most distinct features of Docear are:

  1. A single-section user-interface that differs significantly from the interfaces you know from Zotero, JabRef, Mendeley, EndNote, … and that allows a more comprehensive organization of your electronic literature (PDFs) and the annotations you created (i.e. highlighted text, comments, and bookmarks).
  2. A ‘literature suite concept’ that allows you to draft and write your own assignments, papers, theses, books, etc. based on the annotations you previously created.
  3. A research paper recommender system that allows you to discover new academic literature.

Aside from its unique approach, Docear offers many more features. In particular, we would like to point out that Docear is free, open source, and not evil, and it gives you full control over your data. Docear works with standard PDF annotations, so you can use your favorite PDF viewer. Your reference data is stored directly as BibTeX (a text-based format that can be read by almost any other reference manager). Your drafts and folders are stored in Freeplane’s XML format, again a text-based format that is easy to process and understood by several other applications. And although we offer several online services such as PDF metadata retrieval, backup space, and an online viewer, we do not force you to register. You can just install Docear on your computer, without any registration, and use 99% of Docear’s functionality.
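Because BibTeX is plain text, your reference data remains accessible even without any reference manager installed. A minimal sketch (the entry is made up for illustration, and the regex is a naive field extractor, not a full BibTeX parser):

```python
import re

# A made-up BibTeX entry, as it might appear in a .bib file.
entry = """@article{Beel2013,
  author  = {Joeran Beel},
  title   = {Research Paper Recommender Systems},
  year    = {2013},
}"""

# Naive field extraction: real BibTeX needs a proper parser, but the
# point is that the format is readable with a few lines of code.
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))
print(fields["title"])  # Research Paper Recommender Systems
```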

But let’s get back to Docear’s unique approach for literature management…

(more…)

What makes a bad reference manager?

Update 2013-11-11: For some statistical data read On the popularity of reference managers, and their rise and fall
Update 2014-01-15: For a detailed review of Docear and other tools, read Comprehensive Comparison of Reference Managers: Mendeley vs. Zotero vs. Docear

At the time of writing, there are 31 reference management tools listed on Wikipedia, and there have been many attempts to identify the best ones, or even the best one (e.g. here, here, here, here, here, here, here, here, … [1]). Typically, reviewers gather a list of features and analyze which reference managers offer most of these features, and hence are the best ones. Unfortunately, each reviewer has their own preferences about which features are important, and so do you: Are many export formats more important than a mobile version? Is metadata extraction for PDF files more important than an import of bibliographic data from academic search engines? Would a thorough manual be more important than free support? How important is a large number of citation styles? Do you need a search & replace function? Do you want to create synonyms for term lists (whatever that means)? …?

Let’s face the truth: it’s impossible to determine which of the hundred potential features you really need.

So how can you find the best reference manager? Recently, we took an ironic look at the question of which reference managers are the best. Today we want to offer a more serious analysis and propose to first identify the bad reference managers instead of looking for the very best ones. Once the bad reference managers have been weeded out, it should be easier to identify the best one(s) among the few remaining.

What makes a bad – or evil – reference manager? We believe that there are three no-go ‘features’ that make a reference manager so bad (i.e. so harmful in the long run) that you should not use it, even if it possesses all the other features you might need.

1. A “lock-in feature” that prevents you from ever switching to a competitor tool 

A reference manager might offer exactly the features you need, but what about in a few years? Maybe your needs change, other reference managers simply become better than your current tool, or your boss tells you that you have to use a specific tool. In this case, it is crucial that your current reference manager doesn’t lock you in and allows switching to your new favorite reference manager. Otherwise, you will have a serious problem: you might have had the perfect reference manager for the past one or two years, but then you are bound to the now not-so-perfect tool for the rest of your academic life. To be able to switch to another reference manager, your current one should offer at least one of the following three functions (ideally the first one).

  1. Your data should be stored in a standard format that other reference managers can read.
  2. Your reference manager should be able to export your data in a standard format.
  3. Your reference manager should allow direct access to your data, so that other developers can write import filters for it.
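The second point in practice: as long as your tool can emit a standard format, migration is a small scripting job. A sketch converting in-memory reference records to RIS, an interchange format most reference managers import (the record fields and mapping are simplified for illustration):

```python
# Minimal export of reference records to the RIS interchange format.
records = [
    {"type": "JOUR", "author": "Doe, Jane",
     "title": "An Example Paper", "year": "2012"},
]

def to_ris(rec):
    """Render one record as an RIS entry (simplified field mapping)."""
    lines = [f"TY  - {rec['type']}",   # reference type, e.g. journal article
             f"AU  - {rec['author']}",
             f"TI  - {rec['title']}",
             f"PY  - {rec['year']}",
             "ER  - "]                 # end-of-record marker
    return "\n".join(lines)

print(to_ris(records[0]))
```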

(more…)

New paper: “A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation”

Yesterday, we published a pre-print on the shortcomings of current research-paper recommender system evaluations. One of the findings was that results of offline and online experiments sometimes contradict each other. We analyzed this issue in more detail and wrote a new paper about it. More specifically, we conducted a comprehensive evaluation of a set of recommendation algorithms using (a) an offline evaluation and (b) an online evaluation. We compared the results of the two evaluation methods to determine whether and when they contradict each other. Subsequently, we discuss the differences and validity of evaluation methods, focusing on research paper recommender systems. The goal was to identify which of the evaluation methods is most authoritative, or whether some methods are unsuitable in general. By ‘authoritative’, we mean which evaluation method one should trust when results of different methods contradict each other.

Bibliographic data: Beel, J., Langer, S., Genzmehr, M., Gipp, B. and Nürnberger, A. 2013. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys) (2013), 7–14.

Our current results cast doubt on the meaningfulness of offline evaluations. We showed that offline evaluations often could not predict the results of online experiments (measured by click-through rate, CTR), and we identified two possible reasons.

The first reason for the lacking predictive power of offline evaluations is that they ignore human factors. These factors may strongly influence whether users are satisfied with recommendations, regardless of the recommendations’ relevance. We argue that it will probably never be possible to determine when and how influential human factors are in practice. Thus, it is impossible to determine when offline evaluations have predictive power and when they do not. Assuming that the only purpose of offline evaluations is to predict results in real-world settings, the plausible consequence is to abandon offline evaluations entirely.
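To illustrate the kind of contradiction we mean: CTR is simply clicks divided by delivered recommendations, and an offline metric can rank two algorithms the opposite way. A sketch with invented numbers:

```python
# Invented evaluation results, purely for illustration.
algorithms = {
    "A": {"offline_precision": 0.40, "clicks": 30, "shown": 1000},
    "B": {"offline_precision": 0.25, "clicks": 60, "shown": 1000},
}

# Click-through rate = clicks / delivered recommendations.
for name, d in algorithms.items():
    d["ctr"] = d["clicks"] / d["shown"]

best_offline = max(algorithms, key=lambda n: algorithms[n]["offline_precision"])
best_online = max(algorithms, key=lambda n: algorithms[n]["ctr"])
print(best_offline, best_online)  # A B -> the two evaluations disagree
```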

(more…)

New pre-print: “Research Paper Recommender System Evaluation: A Quantitative Literature Survey”

As you might know, Docear has a recommender system for research papers, and we are putting a lot of effort into improving it. Actually, the development of the recommender system is part of my PhD research. When I began my work on the recommender system some years ago, I became quite frustrated: there were so many different approaches to recommending research papers, but I had no clue which one would be most promising for Docear. I read many, many papers (far more than 100), and although they presented many interesting ideas, the evaluations… well, most of them were poor. Consequently, I simply did not know which approaches to use in Docear.

Meanwhile, we have reviewed all these papers more carefully and analyzed how exactly the authors conducted their evaluations. More precisely, we analyzed the papers with respect to the following questions:

  1. To what extent do authors perform user studies, online evaluations, and offline evaluations?
  2. How many participants do user studies have?
  3. Against which baselines are approaches compared?
  4. Do authors provide information about the algorithms’ runtime and computational complexity?
  5. Which metrics are used for algorithm evaluation, and do different metrics provide similar rankings of the algorithms?
  6. Which datasets are used for offline evaluations?
  7. Are results comparable among different evaluations based on different datasets?
  8. How consistent are online and offline evaluations? Do they provide the same, or at least similar, rankings of the evaluated approaches?
  9. Do authors provide sufficient information to re-implement their algorithms or replicate their experiments?
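Questions 5 and 8 both ask whether different evaluation methods rank algorithms consistently. One common way to quantify agreement between two rankings is Kendall’s tau; a minimal pure-Python sketch (the example scores are invented):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation between two score lists for the
    same items (no ties assumed): +1 = identical ranking, -1 = reversed."""
    concordant = discordant = 0
    for i, j in combinations(range(len(r1)), 2):
        s = (r1[i] - r1[j]) * (r2[i] - r2[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Scores of four algorithms under two metrics (invented numbers).
precision = [0.9, 0.7, 0.5, 0.3]
ctr       = [0.04, 0.05, 0.02, 0.01]
print(kendall_tau(precision, ctr))  # ~0.667 -> mostly, but not fully, consistent
```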

(more…)

Docear 1.0 RC3: Improved monitoring concept, neater GUI, and many bug fixes

Today we released RC3 (Release Candidate 3) of Docear 1.0 (not yet on the official download page, but here in the blog only). It has one major change compared to previous Docear versions: we got rid of the “Incoming” mind map. In the past, most users never really understood why there was an ‘Incoming’ mind map and a ‘Literature & Annotations’ mind map. Now there is only the ‘Literature & Annotations’ mind map, but it has a special “incoming” node to which new PDFs are added. We hope that this concept is easier to understand. It also means that when you move a PDF from the incoming node to any other node in the mind map and then create new annotations in the PDF, the new annotations are added directly to the PDF’s node in the mind map, and no new node appears in the incoming node. However, if you prefer the old concept, don’t worry: you can keep your old incoming mind map and use Docear as you are used to.

(more…)

Which one is the best reference management software?

Update 2013-10-14: For a more serious analysis read What makes a bad reference manager?
Update 2013-11-11: For some statistical data read On the popularity of reference managers, and their rise and fall
Update 2014-01-15: For a detailed review, read Comprehensive Comparison of Reference Managers: Mendeley vs. Zotero vs. Docear

<irony>Have you ever wondered what the best reference management software is? Well, today I found the answer on RefWorks’ website: the best reference manager is RefWorks! Look at the picture below. It might be a little bit confusing, but we did the math: RefWorks is best and beats EndNote, EndNote Web, Reference Manager, Zotero, and Mendeley in virtually all categories.

Comparison of reference management software - Refworks is the best reference manager

Source: RefWorks

(more…)

Who wants to develop Docear4LibreOffice or Docear4OpenOffice?

A few months ago we released Docear4Word, an add-on for Microsoft Word that allows you to insert and format citations and bibliographies very easily. Many of our users love Docear4Word. However, not all of our users work with Microsoft Word; many use OpenOffice or LibreOffice. One of them is Stephen from Uberstudent, a Linux distribution for learners. Stephen, like many others, urged us to develop an add-on for LibreOffice or OpenOffice comparable to Docear4Word. Unfortunately, we don’t have the expertise to do this.

Therefore, we would like to ask for your help. Do you have experience in developing add-ons for LibreOffice and/or OpenOffice? Then please contact us. We have prepared a description of what Docear4Libre/OpenOffice should be able to do. Read it carefully and tell us how long you would need to implement it. And don’t forget to tell us how much money you would want for it. That’s right, we are not expecting you to do it for free; we are willing to pay for it. Once we have found an appropriate developer, we will ask our users to donate for Docear4Libre/OpenOffice and contribute a good amount ourselves. Stephen will also ask the Uberstudent users to donate.

(more…)

Docear 1.0 (RC2) available with many bug fixes and better support for MacOS PDF viewers

There is a new version of Docear available for download. It’s basically the (experimental) RC1 version done right. RC2 fixes many bugs that were caused by the new workspace model with multiple projects, features a refined and polished version of the ribbon, fixes many bugs in general, and supports the standard PDF viewers of Mac OS X (Preview and Skim), and probably many other viewers as well!

If you are still using Beta9 of Docear, a lot of things will change and improve with this new version. However, converting your old mind maps to the new format is a one-way process (you can’t use the converted files with Beta9 anymore), and the conversion itself might take some time, depending on the size of your mind maps. Please back up your files before upgrading to Docear RC2.

new icons

Some of Docear’s new icons in the ribbon bar

(more…)

Preview of Docear’s (Web) Collaborative Mind Mapping Tool to be presented at HTW in Berlin

Since March, Docear has offered a simple web-based mind map viewer, developed with some of our volunteering students and supported by the Freeplane team. Next Friday, July 12, at 10:30 am, the students will present their final work at the HTW in Berlin. You are cordially invited to join the presentation and be among the first to see Docear’s new collaboration and synchronization features. The work is not yet ready to be released to the public, but we hope to finish it completely in the next few months. However, even the preview is really amazing! Compared to the current online viewer, the new “Docear Web” offers many more features. First of all, you can edit your mind maps online, and not only on your own but together with your colleagues. Collaboration works both with your local Docear Desktop and with the web-based Docear. That means you can start Docear Desktop as usual, and your colleagues may work on the same mind maps either on the web or with Docear Desktop as well. Collaboration happens in real time, similar to Google Docs. In addition, there is a Dropbox-like utility that synchronizes all your data between different devices (and the web). As said, not everything is fully functional yet, but the preview version has all the basic features and gives you a very good idea of what to expect from the final version.

(more…)

Docear 1.0 (RC1) released with new workspace and new UI (ribbons)

The last version of Docear was released three months ago, and you might wonder what we have been doing. Well, I can tell you, we were really busy. Besides working on some research papers for conferences in Indianapolis and Malta (read here and here), we finally implemented two major milestones for Docear. These two milestones were actually the last ones on our road-map for releasing the final 1.0 version of Docear. And here it is, Docear 1.0 (RC 1), with:

1. A new setup dialog

We have completely redesigned the dialog that appears when Docear is started for the first time. We believe it is much more user-friendly and intuitive. We also listened to those users who criticized that our terms of service had to be accepted even when no online services were activated. Now you have the choice: you can either use Docear as a registered user and enjoy its full potential, including PDF metadata retrieval, online backup, the online mind map viewer, and recommendations; or you can use Docear as a local user, with no data at all being submitted to Docear and no requirement to accept any terms of service (just use Docear as you would use any other GPL desktop software).

Docear's new setup dialog

(more…)

Three new research papers (for TPDL’13) about user demographics and recommender evaluations, sponsored recommendations, and recommender persistence

After three demo papers were accepted for JCDL 2013, we just received notice that another three posters have been accepted for presentation at TPDL 2013 in Malta in September 2013. They cover some novel aspects of recommender systems: re-showing recommendations multiple times, considering user demographics when evaluating recommender systems, and investigating the effect of labelling recommendations. But you can read the papers yourself, as we publish them as pre-prints:

Paper 1: The Impact of Users’ Demographics (Age and Gender) and other Characteristics on Evaluating Recommender Systems (Download PDF | Doc)

In this paper we show the importance of considering demographics and other user characteristics when evaluating (research paper) recommender systems. We analyzed 37,572 recommendations delivered to 1,028 users and found that older users clicked on recommendations more often than younger ones. For instance, users aged 20 to 24 achieved click-through rates (CTR) of 2.73% on average, while the CTR for users aged 50 to 54 was 9.26%. Gender had only a marginal impact (CTR 6.88% for males, 6.67% for females), but other user characteristics, such as whether a user was registered (CTR 6.95%) or not (4.97%), had a strong impact. Based on these results, we argue that future research articles on recommender systems should report demographic data to make results more comparable.
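The per-group CTR figures above are all computed the same way: clicks in a group divided by recommendations delivered to that group. A sketch with invented event data (not the paper’s dataset):

```python
from collections import defaultdict

# Invented (age_group, clicked) delivery events, purely for illustration.
events = [("20-24", True), ("20-24", False), ("20-24", False),
          ("50-54", True), ("50-54", True), ("50-54", False)]

shown = defaultdict(int)
clicks = defaultdict(int)
for group, clicked in events:
    shown[group] += 1       # every event is one delivered recommendation
    clicks[group] += clicked  # True counts as 1, False as 0

# CTR per group = clicks / deliveries.
ctr = {g: clicks[g] / shown[g] for g in shown}
print(ctr)
```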

(more…)

Docear at JCDL 2013 in Indianapolis (USA), three demo papers, proof-reading wanted

Three of our submissions to the ACM/IEEE Joint Conference on Digital Libraries (JCDL) were accepted. They relate to recommender systems, reference management, and PDF metadata extraction:

Docear4Word: Reference Management for Microsoft Word based on BibTeX and the Citation Style Language (CSL)

In this demo paper we introduce Docear4Word. Docear4Word enables researchers to insert and format references and bibliographies in Microsoft Word, based on BibTeX and the Citation Style Language (CSL). Docear4Word features over 1,700 citation styles (Harvard, IEEE, ACM, etc.), is published as an open-source tool on http://docear.org, and runs with Microsoft Word 2002 and later on Windows XP and later. Docear4Word is similar to the MS Word add-ons that reference managers like Endnote, Zotero, or Citavi offer, with the difference that it is developed to work with the de-facto standard BibTeX and hence with almost any reference manager.

(more…)

Docear4Word 1.1: Support for “Suppress Author”, “Author only”, and some other nice options

Docear4Word 1.1 is available for download, and it offers two new features that will greatly improve your work with references in Microsoft Word. Specifically, we added two new elements to the “Add References” dialog.

The first one is a “Docear->Docear4Word” button. It is intended for adding several references at once when you have multiple BibTeX keys in your clipboard. Here is how it works: most reference managers (e.g. JabRef and Docear) allow you to select several reference entries from the database and copy their BibTeX keys to the clipboard. That means you have a string like “Cohen05,ritchie2008,Eto12” in your clipboard. Now, when you press the “Docear->Docear4Word” button, Docear4Word automatically gets that string from the clipboard, identifies the BibTeX keys, and selects the references belonging to those keys. This makes inserting several references at once much easier.
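The key-identification step can be sketched in a few lines; this is our guess at the splitting logic for illustration, not Docear4Word’s actual implementation (which is a .NET add-on, not Python):

```python
def parse_bibtex_keys(clipboard_text):
    """Split a clipboard string like 'Cohen05,ritchie2008,Eto12'
    into individual BibTeX keys, ignoring stray whitespace and
    empty fragments from trailing commas."""
    return [key.strip() for key in clipboard_text.split(",") if key.strip()]

print(parse_bibtex_keys("Cohen05, ritchie2008,Eto12"))
# ['Cohen05', 'ritchie2008', 'Eto12']
```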

New Docear4Word Features

(more…)