Image Manipulation: A Relatively New Form Of Research Misconduct

By

Leonard Zwelling

[Linked article: “The wizard men curing breast cancer”]

After blogging for these many years, I have found that I get ideas from many different sources. Admittedly, the news—print and electronic—is the most frequent source of inspiration, but sometimes it is emails from colleagues and readers. This is one such case.

A good friend and reader sent me the publication linked above, from January 20, 2020, by the noted detective of research misconduct Leonid Schneider. Apparently there is a network of people who look for deficiencies in published papers and try to warn the public, journal editors, authors, and institutional officials about them.

This article cites many examples of what, until the late nineties, was a relatively rare form of research misconduct: image manipulation. Undoubtedly, the onset of this plague coincides with scientists gaining the ability to generate their own publication-ready images, thanks to powerful personal computers and software like Photoshop. This all came after my time in the lab, when we had to take actual photographs to a graphic artist to assemble the figures we submitted in papers and grants. It’s a different world and not one I know a whole lot about. I needed to get educated.

One of the problems in this world of gotcha scientific detective work is that the sources of the reports of supposed misconduct are themselves often of questionable reputation. Many of these folks are genuinely controversial. This may be unfair, but one, “Clare Francis,” is a known pseudonym. For whom? No one seems to know, and no one is quite sure of his (or her) credentials.

On the advice of my friend, I contacted one of the rare scientific detectives who is considered the real deal—Elisabeth Bik. She actually writes peer-reviewed papers about her findings and is not at all “snarky” about it.

She wrote me right back and sent me links to a number of her publications and references to those publications in the lay press. Here’s what I found out.

In 2016, in the journal mBio (7:e00809-16, 2016; https://mbio.asm.org/content/7/3/e00809-16), Bik and her colleagues reported an enormous review of 20,621 papers published in 40 journals between 1995 and 2014. She inspected each of these papers for evidence of image manipulation; the images in question were usually Western blots, micrographs, or FACS images. She sorted the problematic duplications into three categories (a rough sketch of how such duplications might be flagged automatically follows the list).

Category I: Simple duplication. Figures with identical panels. Most of the time this was attributed to author carelessness and not research misconduct.

Category II: Duplication with repositioning. One part of an image was shifted or rotated and used to represent a different experiment, which it was not. This was often intentional.

Category III: Duplication with alteration. This was the category most likely to be associated with an intent to deceive, i.e., misconduct.
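
To make the first category concrete: simple duplications, identical panels reused within or across figures, are exactly the kind of thing software can help flag. Below is a minimal sketch in Python of one way candidate duplicates might be detected, by correlating downsampled grayscale versions of each panel. This is purely my own illustration under stated assumptions: the filenames are placeholders, the panels are assumed to be pre-cropped into separate image files, and Bik and her colleagues did their screening by eye, not with a script like this.

```python
# A rough illustration (not Bik's actual method, which was visual inspection)
# of flagging candidate duplicate panels. Filenames below are placeholders.
from itertools import combinations

import numpy as np
from PIL import Image


def panel_fingerprint(path, size=32):
    """Downsample a panel to a small grayscale array with zero mean and unit variance."""
    img = Image.open(path).convert("L").resize((size, size))
    arr = np.asarray(img, dtype=float)
    arr -= arr.mean()
    std = arr.std()
    return arr / std if std > 0 else arr


def flag_possible_duplicates(paths, threshold=0.95):
    """Return panel pairs whose pixel correlation exceeds the threshold.

    Plain correlation only catches Category I (simple duplication);
    comparing panels against rotated or flipped versions of each other
    would be needed to catch Category II (duplication with repositioning).
    """
    prints = {p: panel_fingerprint(p) for p in paths}
    suspects = []
    for a, b in combinations(paths, 2):
        corr = float((prints[a] * prints[b]).mean())  # Pearson correlation of pixels
        if corr > threshold:
            suspects.append((a, b, round(corr, 3)))
    return suspects


if __name__ == "__main__":
    # Hypothetical cropped panels from a submitted manuscript
    print(flag_possible_duplicates(["fig1_blotA.png", "fig2_blotC.png", "fig3_facs.png"]))
```

Real screening tools use more robust perceptual hashing that tolerates rescaling and compression; the point here is simply that Category I duplications are mechanically detectable, while Categories II and III take considerably more effort to find, which is partly why they are more suggestive of intent.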

The findings of this study are frightening. Fully 3.8 percent of the papers (roughly 780 of the 20,621 screened) were found to have problems with their images. About half of these duplications appeared to be intentional, and the problem grew worse over the years studied.

In a subsequent publication looking at data in one journal only, Molecular and Cellular Biology, Bik and colleagues found 59 of 960 papers with image duplications. Most of the errors were correctable, but about 10% of the affected papers were retracted. She extrapolated from this to a conclusion that roughly 35,000 papers in the biological literature may contain erroneous image data. She and her colleagues also found that editorial screening of papers about to be published could catch many of these errors, and that the work it took to correct them prior to publication was far less than that needed to fix an already published manuscript (https://mcb.asm.org/content/early/2018/07/18/MCB.00309-18).

Finally, in a later paper, Bik and her colleagues (Fanelli et al., https://link.springer.com/article/10.1007/s11948-018-0023-7) began to explore the origins of this problem. Using only papers published in PLoS ONE, they found statistically significant associations between Category II and III image duplications and submissions from certain developing countries. In other words, this is cultural. Interestingly, male scientists were no more likely to duplicate images than female scientists.

Is any of this helpful?

It certainly suggests that technology has abetted the ability of scientists to inflate the apparent significance of their work, sometimes to the point of falsifying the data. This in turn leads to erroneous conclusions about biology and medicine that can damage progress in research aimed at new therapies. It also suggests that journal editors are going to have to install processes to screen the submissions they receive more rigorously, and to do so willingly, as the work to prevent the publication of bogus data is far less than that involved in correcting a mistake once it is in print.

The world has evolved a great deal since the day in 2007 when I ceased being the Research Integrity Officer at MD Anderson, but I cannot say it has evolved in a particularly unexpected fashion. Even back then I was already having to review allegations of misconduct against scientists whose entire research record was stored electronically. There were no auditable paper notebooks. Photoshop has enabled image manipulation of previously unthinkable power.

Institutional officials will have to raise their awareness of yet another threat to research integrity at a time when research results involve not only fame and tenure, but money as well.

Dr. Bik has really raised my consciousness about this issue as has my friend who sent me the original article.

But in the end, nothing has really changed. Meaningful scientific progress will always depend on the accurate and honest reporting of well-designed experiments, and with so much now at stake, that dependence is more critical than ever.
