Exciting news: Jason Hoyt, the founder of Ologeez (a semantic frontend for PubMed), is joining Mendeley! Jason holds a Ph.D. in Genetics from Stanford University. At the moment, he is still based in Palo Alto, but once the visa issues are sorted out, Jason will be joining us here in London as our new Research Director. TechCrunch broke the story today with a headline that made our geek hearts beat faster, comparing us to a Klingon battle cruiser de-cloaking in London.
To get started, Jason wrote up his reasons for joining us, and how Mendeley can help change the Impact Factor. Over to him:
Changing the Journal Impact Factor
Right, so the first thing I had to ask myself was “Why on earth would I move from San Francisco to London, leaving behind a cushy life, to work for a reference management start-up?” Surely any rational person would find this a bit odd.
Well, I’m not going to answer by talking about how great the team is or how enthusiastic the founders are about improving research, which is certainly all true. Rather, let’s take a real-world example of how the “tech” behind Mendeley is already making a difference with how we view the impact factors of research.
The credit for the following observation actually goes to Dr Cameron Neylon. A few weeks ago, Cameron noticed that the Open Access journal PLoS One had climbed to the number five spot among the most-read journals in the biological sciences. He took a screenshot of the statistic found on Mendeley and posted it on Flickr:
In true Web 2.0 fashion, the Flickr image was noticed by the managing editor of PLoS One, Peter Binfield, and made its way over to Twitter:
Certainly, it’s great that his young journal has made it up there with these established top tier journals, but like I always ask myself when reading literature for my research, “What’s the real significance here?”
In terms of Open Access journals, it has some major significance. People are downloading and reading Open Access content and probably feel it is just as worthy as closed-access material from well-established, high-impact journals. We could be cynical for a moment and propose this statistic is just the result of the “early adopter syndrome.” In other words, people likely to be the first to use “Web 2.0” reference management software such as Mendeley are also more likely to support Open Access journals. To that, I’d answer perhaps, but we’re still scientists, and we don’t just read Open Access because it has a free model appealing to “everything on the Internet should be free” anarchist types. Rather, we will read Open Access if it is beneficial to our research. Anyway, we’re veering off onto a tangent here, so to get back to the story…
The significance isn’t primarily about PLoS, it’s about the ability to finally measure impact at the article-level in real-time.
In the 1950s, an equation for measuring the impact of journals was developed, which is now calculated via non-transparent standards by Thomson Scientific. There are plenty of resources on the background, so I’ll point you there. Since its inception, the impact factor has been fiercely debated and frequently misused. Major journal publishers love it because a single highly cited article is enough to boost an entire journal’s impact factor, thus ensuring continued subscriptions. University librarians hate it because of those high subscription fees. On the other hand, university administrators and grant reviewers like it because it’s an easy-to-use metric when determining worthiness for tenure or grant applications. To emphasize the ridiculousness of that last use of the impact factor – consider that you may be denied a job, grant, or tenure simply because your article appeared in a “lower-tiered” journal, even if your article greatly contributed to an important scientific advance. This isn’t to say administrators and grant reviewers are ignorant of the drawbacks to journal impact, nor is it the only metric they use. That said, there is one thing everyone acknowledges: it is a flawed system.
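For readers unfamiliar with the calculation itself, the classic two-year impact factor is simple to state: citations received this year to items a journal published in the previous two years, divided by the number of citable items it published in those two years. A quick sketch (the numbers are made up purely for illustration):

```python
def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    articles from the previous two years, divided by the number of
    citable items published in that two-year window."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 300 citations in 2009 to its 2007-2008 articles,
# of which there were 150 citable items.
print(journal_impact_factor(300, 150))  # → 2.0
```

Note how coarse this is: one runaway hit article can lift the number for every other article the journal published, which is exactly the distortion described above.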
Luckily, things are starting to change. For instance, in March 2009 PLoS embarked on an article-level metrics campaign. Each article is presented with a related-content tab and individual citation information. Starting in June, you’ll be able to see some real-time impact information on each article to gauge its popularity. PLoS is a major exception in the publication industry, with very few peers willing to reveal the same level of information (others include the Journal of Vision and BioMed Central). This brings me to the significance of Mendeley’s technology, and partly why I decided to join. Since we are aggregating data from every article in every journal and correlating it with user behavior, we can accomplish at least three major things:
- We can now measure the impact of an individual article in terms of readership: for example, the number of downloads, the average time spent reading, and how often the article is shared.
- The measurement is in real-time. We no longer have to wait two years or more before seeing how often an article is being cited to determine its worth.
- We are leveling the playing field for all journals. Through readership statistics, each article can now be rated upon its own merits rather than by the journal it happens to get accepted into for publication.
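To make the readership idea concrete, here’s a minimal sketch of how per-article counts could be tallied from a raw usage log. The event names and data layout are my own assumptions for illustration, not Mendeley’s actual pipeline:

```python
from collections import defaultdict

# Hypothetical usage log: (article_id, event_type) pairs.
events = [
    ("article_1", "download"), ("article_1", "download"),
    ("article_1", "share"),    ("article_2", "download"),
]

def readership_stats(events):
    """Tally per-article readership counts from a raw event stream."""
    stats = defaultdict(lambda: defaultdict(int))
    for article_id, event_type in events:
        stats[article_id][event_type] += 1
    # Convert to plain dicts for easy inspection.
    return {article: dict(counts) for article, counts in stats.items()}

print(readership_stats(events))
# → {'article_1': {'download': 2, 'share': 1}, 'article_2': {'download': 1}}
```

The point of the sketch is that each article gets its own tallies, independent of whatever journal it appeared in, and the counts update as soon as events arrive rather than years later.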
Obviously this has some major implications for authors, grant reviewers, universities, employers, and publishers. It is also worth noting that it would be unreasonable to assume this will change overnight how decisions are made, for example in the tenure-track process. And it is naïve to assume that this is a perfect system right out of the gate, or that it will replace the standard journal impact factor. In fact, it should be used in combination with other metrics in a holistic approach. One example is the “author impact factor,” or h-index, though this too is flawed: articles appearing in high-impact journals are more likely to be cited, which arbitrarily inflates the h-index, while articles in lower-tiered journals may never be cited at all once those journals are cut from library subscriptions due to expanding costs, despite the fact those articles may have greater relevance.
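Since the h-index comes up here, a quick illustration of how it is computed: an author’s h-index is the largest h such that h of their papers have at least h citations each. The citation counts below are invented:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4, and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

You can see the flaw mentioned above directly in the input: the metric only sees raw citation counts, so anything that systematically depresses citations to lower-tiered journals depresses the h-index too, regardless of the work’s relevance.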
This leads to one final point. By measuring the real-time impact of single articles, it is more likely those “hidden gems” in lower-tiered journals that are not novel enough to change an entire field, yet are more relevant to your topic, will rise to the top via our planned recommendation engine. And excuse my pun, but that’s an entirely different article to be written up some day.
At a higher level then, Mendeley’s significance isn’t just about real-time impact factors and article-level metrics. It’s about using technology for the first time to crowdsource data and forever change how research is done. That is why I’m crazy enough to move halfway around the world. Mendeley really isn’t just another “Silicon Valley” start-up.
What are your thoughts on real-time article-level metrics?