Last week I’d offered to do a full write-up of how to read a scientific paper critically, and asked if there was any interest in such a topic. No one asked for this, but here you are anyway. I did the research, and I’ve got nothin’ else! Besides which, it’s a fascinating topic to me, and every time I delve deeper into it, I get happier about the career decisions that led me away from the publishing train. A lot of the problems in science today stem from the way scientists are evaluated: have you published anything recently? In a serious journal? No? Ok, any journal will do. Did you have positive, real effects? No? No one cares that you proved a negative; go back and get me a positive. We want results, or you’re fired! Which, human nature being what it is, leads to… well, it’s not science, unless you’re talking about the study of human psychology when one is backed into a corner and one’s livelihood threatened. In dire cases, when the scientist’s government gets involved, one’s life might be at stake.

And that’s even without getting into citation padding, authorial padding (there’s an ongoing scandal in South Korea where researchers have been adding their children’s names to their papers to pad the children’s academic resumes), and duplication of results. Not replication, which is the gold standard, but using the same results in multiple papers, which is highly unethical and will lead, if caught, to a retraction of the paper. Enough of those, and you will lose your funding and position, and have to start over.
Which brings me to the blog that highlights so many of these train wrecks, Retraction Watch. The blog not only highlights papers that have been retracted from publication, but also offers great weekly roundups of links to various scandals in science. Like the recent case where an eagle-eyed scientist, who does this sort of thing as a kind of crusade for good science, spotted possible fraudulent image manipulation in over 200 papers. Most of us here on the blog have limited journal access, if any, so we could well find ourselves in the position of reading a paper that has been retracted. There’s no shame in that – scientists who really ought to know better do it themselves all the time. A recent study on the fraudulent and shameful Wakefield paper on autism concluded that “Even authors who used terms such as ‘flawed’ or ‘false’ to describe the Wakefield paper didn’t always note the retracted status of the paper. My team felt that documenting the retraction carries a great amount of weight in demonstrating that the findings were fraudulent, and by missing out on this important piece of information, people may be under the perception that the work could be valid.”
But I could wander very far into the weeds, indeed, with this. It’s not that I don’t want you all to trust science. It’s just that in science, especially these days, you must read any paper with a critical eye. Looking at the small details in the margins can yield big clues, and that’s where I’ll try to focus. Looking at the design of a study is also important, and crucial to generating good data is using good controls. For example, machine learning is all the rage in science currently – allowing computers to crunch vastly more data than is humanly possible seems like a wonderful idea, but… “Machine learning algorithms readily exploit confounding variables and experimental artifacts instead of relevant patterns, leading to overoptimistic performance and poor model generalization.” The paper goes on to suggest that adversarial controls that anticipate the problems inherent in the algorithms can lead to better data. When you are talking about studies on humans, you want to look at study sizes – the larger the better – and at things like control groups, blinded studies (blinding is good!), and how the reporting was done. Self-reporting of symptoms is dubious at best. Asking people to keep track of what they ate (for instance) or their pain levels for weeks, let alone years, is a recipe for messy data and unreliable results. Which is part of the reason nutrition science is such a hot mess right now.
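For the curious, here’s a minimal sketch of how that confounding problem plays out in practice. The data here is entirely made up for illustration – it is not from any paper mentioned above. One column of the “measurements” (think: which instrument batch a sample was run on) happens to track the labels in the training data, but is pure noise in data from a hypothetical new lab:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Training data: 10 noise features, but column 0 is a confound that
# happens to track the label (e.g. cases and controls were run on
# different instrument batches).
y_train = rng.integers(0, 2, n)
X_train = rng.normal(size=(n, 10))
X_train[:, 0] = y_train + rng.normal(scale=0.1, size=n)

# Data from a "new lab": the confound column is now just noise.
y_test = rng.integers(0, 2, n)
X_test = rng.normal(size=(n, 10))

# A naive learner: pick the single feature most correlated with the
# label, and threshold it at its training-set mean.
corrs = [abs(np.corrcoef(X_train[:, j], y_train)[0, 1]) for j in range(10)]
best = int(np.argmax(corrs))  # it latches straight onto the confound
cut = X_train[:, best].mean()

train_acc = np.mean((X_train[:, best] > cut).astype(int) == y_train)
test_acc = np.mean((X_test[:, best] > cut).astype(int) == y_test)
print(best, round(train_acc, 2), round(test_acc, 2))
```

The learner scores nearly perfectly on its own training data and drops to roughly coin-flip accuracy on the new lab’s data – exactly the overoptimistic performance and poor generalization the quoted paper warns about, and why it argues for adversarial controls that probe for this failure before anyone trusts the headline accuracy number.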
When you are reading a paper, you will want to look at a few things right away. First, the journal the paper was published in. Some journals, as I mentioned last week, will publish anything if the authors are willing to pay a fee. Vanity publishing is no better in science than it is in fiction publishing, and the results are just as dubious. Look at the section (usually at the bottom of the first page) where any conflicts of interest are laid out. Having some mentioned here is not a bad thing – the problem arises when the authors don’t disclose potential conflicts, which is invisible without a great deal more research on the reader’s part – but it is something to keep in mind when assessing any potential bias the scientists may have. They are human. There will be bias, but a properly designed study will still yield good data and results that should be reproducible. Sadly, there seems to be little to no inducement for the publication of results that reproduce, and reinforce, good scientific work. In fact, there seems to be a lot of antagonism toward it. Finally, look at the results themselves. This isn’t where you need to put on your science hat; it’s where you can put on your writing hat. What does the wording look like? Is it cautiously optimistic, straightforward, dry and factual? Probably reliable. Is it hyperbolic, sensational, and does the word ‘cure’ appear? Probably unreliable. Science that makes for a good story rarely looks like it on the surface. It’s knowing the possible ramifications of a result that leads to the enthusiasm and excitement of potential, and as science fiction authors, that’s our job. We take the science and run with it, making it exciting and real, and inspiring.