

Photo by bruce mars on Unsplash

Evaluating Academic Sources (Source Material Part 4)

Once you've determined that the primary source is published in an academic journal, it may still not be clear whether the work has been competently peer reviewed or whether the journal it appears in is a reputable publication.

 

First, we want to keep our eyes open to whether the source is published in a low-quality venue, such as a predatory journal or vanity publisher. What are these? Have a look at the links to learn more. There are plenty of these low-quality journals out there, and academics are constantly bombarded with emails soliciting submissions. Coupled with this, there is a huge incentive for academics to publish their work. Unfortunately, that combination sometimes leads to low-quality work being published at venues that don't care about quality. Most academics are (hopefully) skeptical of journals in their field that they haven't heard of before; however, if you aren't in academia, almost all journals in any given area are ones you haven't heard of, and it's hard to know which ones you shouldn't trust. It's your job to verify: at minimum, do a quick Google search and see whether other scholars have noted problems with the journal. Don't trust anything that comes from a journal with a poor reputation.

But publication in an upper-tier journal isn't perfectly indicative of good ideas or good science. A famous example that set alarm bells ringing in my own field is Daryl Bem's paper published in the Journal of Personality and Social Psychology (a well-regarded psychology journal) purportedly finding evidence that ESP (extrasensory perception) is real. Much has been said about this paper in the years since its publication. Ultimately, the outlandishness of its apparent findings was one piece of a much larger puzzle indicating that something was seriously amiss in much of how psychological science had been conducted to that point. Many exciting and necessary changes are now underway to improve the field.

So if poor research can sometimes be published in an apparently good journal, how do we know which work we can trust? That's the big question scholars are struggling to figure out today. For a look at problems in science in general, see the recent book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, by Stuart Ritchie. For some interesting and thought-provoking conversations about the evolving state of psychological science in particular, check out the Two Psychologists Four Beers and The Black Goat podcasts.

There are some things you can look for to boost your confidence in the scientific literature. The key one is open science, which is about being transparent about everything you and your team did as researchers (see the Center for Open Science and this article for an overview of recent advances). Good signs include pre-registration of hypotheses and research methods, and a willingness on the part of researchers to share the materials and data they used.

 

This transparency allows others to see precisely what the researchers did to reach their findings. It also allows other researchers to replicate the work, which is a key part of the scientific process. As a result of concerns about questionable research practices (QRPs), there has been a recent trend toward more openness. Accordingly, more attention is being paid to the value of pre-registration, whereby researchers register their hypotheses and methods before the work is conducted. This places constraints on something all good scientists know is wrong: devising or revising hypotheses after the results are known, often referred to as HARKing (hypothesizing after results are known). HARKing is bad science, but it happens in practice, and in the past there wasn't much in place to discourage it. With the movement toward pre-registration and the sharing of materials and data, such practices should slowly be weeded out.

 

It's important to note that these recent trends are good practice and useful clues for the reader; however, their absence—particularly in the past when they weren't common—is not diagnostic of bad research. It just makes it harder to know whether good practices were used.

 

This leaves the non-expert in thorny, uncertain territory. Consider, though, that it's better to acknowledge this complexity and uncertainty than to ignore it. If you're seeing a lot of complexity where you didn't before, and now have more questions than answers, you're becoming more informed and better equipped to know when you should and shouldn't update your prior beliefs based on your reading. It's okay to hang out in a grey area, not knowing whether you can believe a particular research finding; that's much better than assuming that every finding you come across is representative of reality (which is certainly not the case!).

Learning Check
