Peer-reviewed publications are the currency of academia, and they are the primary driving force in determining who gets faculty jobs. There are other considerations, but this is #1. It hasn’t always been the case: take a look at older faculty CVs and count the number of papers they had when they were hired (1 or 0), and what their yearly rate was prior to tenure. But now that they are tenured and in charge of hiring, they’ve come up with a new set of rules. I’ve been told that if you want tenure, you should be aiming at 2-3 peer-reviewed papers a year. That doesn’t count everything else, but still, 1 is clearly too low, and averaging less than one dooms you to the academic margins (say hi to me when you are there). Again, there are exceptions, but this is largely the message. A person’s success or failure on the job market is often chalked up to their publication record, or the all-powerful h-index.
It is obvious that this is a relatively new phenomenon. Publishing has always been important, but the volume of expected papers continues to increase (well past what many older tenured faculty have ever produced, btw). Publishing is an essential part of science; it allows ideas to be spread and tested. That is obvious. But is the push for volume good for science? For an extreme, but likely not uncommon, example, consider Peter Higgs, emeritus professor, theoretical physicist, and Nobel Laureate, who has recently argued that in today’s academic climate he never would have landed a job, let alone tenure.
The whole push for publications isn’t just about writing papers. I am now in a position where I am asked to review papers and proposals every so often. I am also someone who keeps up to date with recent publications, both on my own and as part of reading groups. And I’ve noticed something: most papers aren’t that great. A few a year are fantastic, most are meh, and a small number are terrible. Even things published in the highest-profile journals have a less-than-50% chance of being really well written, IMHO. They aren’t necessarily wrong, just missing something. I don’t know how long this has been the case; my view is, of course, biased. The only old papers that I know of are the ones that have already stood the test of time – I rarely browse a GSA Bulletin from 1982 just because. But still, when I browse new issues and download the most recent papers, my most common reaction is meh. Important to be published, but a little more time, a little more data, and a more careful interpretation would have served them well. I’ve also read plenty of papers that weren’t reviewed as closely as they should have been. I am not saying they are wrong or bad, but that they could have been much better.
When I read a paper that is relevant to my work, review a paper, or have one to discuss in a reading group, I usually try to dig into it. By this I mean that I look carefully at the figures and the tables, read the paper more than once, check the references, and of course, download the data repository. As methods become more and more specialized, though, my ability to critically evaluate the data and methods narrows. When I am working on papers that deal with thermochronology, I am golden. Geochronology, pretty good. Regional geology, depends where.
OK, so some things we can all agree on:
1. It is important to publish all data: even if the interpretation is uninteresting or indeterminate, the data need to be out there. There are no real failures in science, and there is no roadmap to discovery. Despite the love we give to people who have awesome results, their work isn’t necessarily any better at advancing science, or any more important, than the hundreds of studies that haven’t come up with anything interesting.
2. The more papers that are published, the more papers you are asked to review. The more you are asked to review, the less time you spend on each one (especially since you get absolutely no credit for reviewing papers).
3. Good papers require time to write, especially when the data is complex. They also require funding, lab time, and field work (or some variation of these things).
So this has me thinking, what is a realistic and sustainable rate at which good papers can be published? The rate of expected publication continues to increase (based on recent hires I’ve watched), but where should it land? How does that vary from discipline to discipline? Shouldn’t we encourage scientists to write better papers?
For example, if you had the choice between publishing 2 solid papers with lots of data and a well-thought-out interpretation, or 4 mediocre papers that are all missing a little something, the academic market forces would push you to choose neither, and somehow squeeze out 5 papers as quickly as possible. I think this is terrible. Meetings are for in-process studies; papers are for when things have reached a natural stopping point.
Now, complaining isn’t my goal. I know full well that this is the current state of academia, but it is not sustainable, and I am curious where it ends. Is it the best possible solution? Is it the best way to do science? The community could change whenever it wanted to, so shouldn’t it look to create the best climate, rather than being content to perpetuate an unrealistic norm?
The most dangerous phrase in the language is, “We’ve always done it this way.” – Grace Hopper
A colleague of mine is now asked to review papers or proposals at least once a week. She’s asked because she is great at what she does, but still, it is absurd to think she could keep up that clip, especially when reviewing papers adds absolutely nothing to your tenure file or CV. If we want people to take time with their reviews and do a good job, then what is reasonable?
I don’t know the numbers. I’ve been trying to do some math in my head. How many people are there in the world who are qualified to critically review thermochronology data and models? If we need at least 3 reviewers per paper (plus an AE), and we expect them to review 1 paper a month (seems reasonable if you want them to do a good job), then how many papers could possibly be submitted and reviewed properly? Of course people other than thermochronologists could provide good criticism, but if we want to make sure that the central data, methods, and models are correct, we need at least a few specialists.
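The back-of-envelope math above can be made explicit. As a sketch, here is the capacity calculation with purely illustrative placeholder numbers (the size of the reviewer pool is a made-up figure, not a real count):

```python
# Reviewing-capacity sketch: how many papers can be properly reviewed
# per year? All numbers are hypothetical placeholders.
QUALIFIED_REVIEWERS = 200   # assumed pool of qualified specialists
REVIEWS_PER_MONTH = 1       # sustainable load per reviewer, per the text
REVIEWERS_PER_PAPER = 3     # reviewers needed per paper (ignoring the AE)

reviews_per_year = QUALIFIED_REVIEWERS * REVIEWS_PER_MONTH * 12
papers_per_year = reviews_per_year // REVIEWERS_PER_PAPER
print(papers_per_year)  # 800
```

Under these assumptions, a pool of 200 specialists caps out at about 800 carefully reviewed papers a year; the real limit shifts with whichever of the three inputs you change, but the point is that the ceiling exists.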
So where is the upper limit? Where do we maximize both production and quality? Where do we allow for in-depth reviews without requiring reviewers to volunteer too much time? Are there ways to make work as a reviewer worth more to a person’s career? How much time should the science behind a paper take? How much does that vary by geoscience sub-field or career stage? I don’t know the answers to these questions, but they seem important. Things can’t speed up forever, especially without sacrificing quality. The current expectations are approaching a fast-food mentality, where quantity is king (not really surprising, given what academia is doing to its employees). Over 4 billion papers published!
So I’m curious, what are your numbers? How long does a paper take, start to finish including field work, sample processing and analysis, and writing? How many could you sustainably write per year? How many can you review critically per year?