
Jim Woodgett is the Koffler Director of Research at Sinai Health System and a professor at the University of Toronto.

The world is seemingly drowning in fake news and bot-driven disinformation campaigns on social media, and is too quick to dismiss expertise. But a recent survey showed that public trust in scientists is, in fact, rising.

Yet scientists shouldn’t cheer blindly. We are as flawed as anyone, and trust is earned, not assumed. Scientists are simply human beings trained to pursue the scientific method, an objective process that has proven remarkably effective, often in spite of the scientific establishment. The fruits of this human endeavour have driven improvements in quality of life, opened up new industries and, of course, wrought new problems. As a consequence, our society invests in science, building and equipping universities and research institutes with specialized equipment and recruiting some of our most brilliant minds. The research is often expensive, hyper-competitive and not of immediate use; its rewards can be both rare and elusive.

At its core, science is a marketplace of ideas that must be tested. Its success depends on practitioners who are willing to take risks and who persist in their exploration even when ignored or, more damagingly, starved of resources. This also means that failure must be an acceptable cost. If the only experiments proposed were those for which we already knew the answers, we’d learn nothing.

This creates a dilemma. How should ideas of unknown but potentially innovative impact be selected for support from limited resources – largely, publicly funded resources – while avoiding projects that could fall flat? Selection is typically handled by peer review, in which scientists privately judge proposals from other scientists and score them on potential impact, quality and feasibility (often influenced by the applicant’s track record). Within this competitive climate, scientists might be tempted to exaggerate a project’s promise or, in rarer cases, to cheat outright by including falsified supporting data.

Of course, scientists are aware of these possibilities and look out for them. There are many infamous examples of intentionally flawed science. Even Gregor Mendel, the Augustinian abbot who established modern genetic principles by breeding pea plants, is thought to have embellished his results, which were, statistically speaking, too clean. Analyses performed by scientific journals have revealed that up to a third of publications contain inappropriately manipulated images or other issues of concern. Websites such as Retraction Watch and PubPeer post examples of dubious scientific data.

Fortunately, science itself is self-correcting. But fraud wastes scarce resources (for example, when researchers try to reproduce faked data), drives promising minds away from science through disillusionment and disgust and, perhaps most importantly, erodes the public’s confidence in science.

Unlike in the movies, professors typically don’t perform experiments themselves; they spend their time writing funding proposals and manuscripts, teaching, and reviewing trainees’ data. Who, then, is responsible for detecting and dealing with laboratory transgressions? The onus falls on the professor, who is expected to oversee the operations of their research team. After all, they are most familiar with the work and most in control of the funding, and it is their reputation at stake: they reap the rewards of success and bear the career consequences of failure.

Complicating the situation are the inherent imbalance of power between professors and their trainees, the difficulty of distinguishing honest mistakes from intentional ones, the potential intimidation of whistle-blowers, and institutions’ conservative instincts. In Canada, suspicion of malfeasance must be reported to the administration, which must then discern the facts and assess the credibility of the evidence. The process is often long and difficult, and outcomes are typically kept confidential. Few institutions wish to air their dirty laundry, yet this secrecy can also propagate a sense that they have something to hide.

Then there is this year’s case at Duke University. The school was found to have been negligent after government funding agencies had grounds to believe it had submitted fraudulent applications. When a whistle-blower took the case to prosecutors, Duke paid US$112-million in a settlement, the equivalent of almost a fifth of the annual budget of the Canadian Institutes of Health Research. The whistle-blower received more than US$33-million. That is certainly one way to grab institutional attention.

How might scientific misconduct be better handled in Canada? Like any disease, it warrants a combination of prevention and treatment. I co-administer a mandatory course for incoming graduate students that covers research ethics, best practices in experimental design, how to spot breaches and what to do about them, and examples of reported misconduct – perhaps something that should be extended to faculty. And Canada could use an equivalent of the United States’ independent Office of Research Integrity, which has the authority to investigate potential cases of misconduct and to publish its findings.

Both would come at some expense. But the risk of declining public credibility and support would be a far greater cost to science.

