Slipping Through the Cracks:
What Victor Ninov’s Scandal Reveals About Scientific Communication
Eleanor Hill
When I have free time, I like to scour YouTube for video essays. These are typically long videos that take a deep dive into a topic of varying importance, from recapping a TV series to breaking down political mysteries. One of my favorites is by the user BobbyBroccoli, who spent almost two hours reporting on one of the biggest scandals in the scientific world: Victor Ninov and the element hunt of the 20th century. Victor Ninov is a Bulgarian physicist who fabricated evidence behind claimed discoveries of new elements during his tenure at Lawrence Berkeley National Laboratory in California and the GSI Helmholtz Centre for Heavy Ion Research in Germany. He carved out a specialty so narrow that no one could check his work, and he essentially hand drew most of his ‘groundbreaking’ discoveries (BobbyBroccoli, 2022). His work was so well received that many believed Ninov would win a Nobel Prize. Instead, he was caught and fired from Berkeley after several of his peers tried and failed to replicate his results (Seife, 2002, para. 2). His fraud, along with a similar case only two months later, inspired a complete overhaul of replication and peer-review guidelines (BobbyBroccoli, 2022, 1:10:50).
How could such a large case of fraud get far enough to generate speculation about a Nobel Prize? The answer lies in a major oversight in the peer-review process, a key step in scientific communication. But the problems are not confined to that one step. The standards used for scientific communication in America’s public spheres are insufficient and irresponsible, and the biggest contributor to this invisible but persistent problem is the poor scientific literacy present in all spheres of communication.
How Did It Get Here: Problems with Scientific Literacy
To begin, understanding what scientific literacy is, how it is taught, and the consequences of its poor application will highlight how important it is and what problems emerge when it is lacking. In the summary of a report on science literacy in the United States from the National Academies of Sciences, Engineering, and Medicine, authors Snow and Dibner (2016) define the term as a skill that “enables people to both engage in the construction of new knowledge as well as use information to achieve desired ends” (p. 1). Essentially, scientific literacy is a person’s ability to understand scientific concepts. If someone can explain what an atom is, how the water cycle works, or how the moon orbits the Earth, then they have basic scientific literacy. The public can generally grasp these basic concepts, but without more in-depth scientific education, more complicated topics (nuclear fission, rising ocean temperatures, gravitational pull) become vastly more difficult. Someone can understand the ‘what’ but get caught up on the ‘how’ and the ‘why’. Unfortunately, improving science education alone is not enough. The problems of scientific literacy extend far beyond the realm of scientific understanding.
Snow and Dibner (2016) expand their definition of scientific literacy to include “[an understanding] of scientific processes and practices, familiarity with how science and scientists work, a capacity to weigh and evaluate the products of science, and an ability to engage in civic decisions about the value of science” (p. 1). This level involves applying scientific knowledge to broader contexts, but because it is taught even less frequently than basic scientific literacy, the skills needed to read and process scientific news and to advocate for policy change are lacking. In short, most people never reach this second level of scientific literacy. The problem snowballs: each more complex topic becomes more and more inaccessible, and people must rely on others to convey that information. One clear example is the public’s practical understanding of the scientific method.
The Misunderstanding of the Scientific Method
Although the scientific method is taught every year in school, its real-world nuances cause strife and distrust of scientists. In a critique of the scientific method, Dr. Mauricio Castillo, editor-in-chief of the American Journal of Neuroradiology, defines it as “a set of ‘methods’ or different techniques used to prove or disprove one or more hypotheses. A hypothesis is a proposed explanation for observed phenomena…[that] needs to be proved or disproved by investigation” (2013, p. 1). Essentially, the scientific method is the system used to acquire and understand new knowledge, and its final step requires constant revisiting. Castillo argues that scientists hold onto the scientific method too tightly even though it “[is] rigid and constrained in its design and produces results that are isolated from real environments and that only address specific issues” (Castillo, 2013, p. 1). Instead, he suggests a shift to a “model-based inquiry” that allows for fluidity and more widespread application of findings and that accounts for the natural chaos of the universe, thus making the research process easier and conclusions neater. This shift would make it easier for scientists to do their work, but it does not address a larger problem with how the scientific method is taught.
The last step of the scientific method, revisiting the conclusion and testing new hypotheses, is often overlooked in definitions and explanations. It may, however, be the most important step, because revisiting is what allows for new discoveries and hypotheses. Advances in medicine, food production, and technology all come from revisiting old conclusions, reopening them, and testing new problems to find better results. Americans look to scientists, as the experts, to give clear-cut and concise answers. When different conclusions come out, scientists can be perceived as backpedaling or being flaky with their data, which sparks distrust in the public. Common examples include changing opinions on nutrition: whether red wine is actually good or bad, whether to eat red meat sparingly or never, which diets are helpful and which are not, and so on. Americans don’t care about the constraints the scientific method puts on researchers or how it fails to reflect the ‘chaos of the universe’, but they do care about results and what they mean. No one wants to trust an indecisive scientist, and if scientists can’t make up their minds on something as simple as wine, why would the public trust them on more critical topics like vaccines or climate change?
This is the snowball effect that comes from poor scientific literacy: a failure to understand a seemingly small topic or process establishes bad habits and preconceived notions. Those bad habits are then applied to more complex ideas, replacing what should be good habits like critical analysis and incorporating established context, building still more bad habits and cementing those preconceived notions. Growing distrust in science makes it harder to get people to understand and care about pressing global issues, as well as smaller matters like medical advice or GMOs, all stemming from a misunderstanding of one step of the scientific method.
Conflicting Perspectives: Distrust and Assumptions
Scientific distrust is becoming a major problem in America’s current landscape because people in the field are not communicating clearly with the public. Patrick Boyle, a senior writer for the Association of American Medical Colleges, highlights several reasons that distrust forms, emphasizing two: a public overwhelmed by too much information, and growing polarization (Boyle, 2022). Both factors worsen when communication channels are poor, because the public, never taught what to do with this information, is left to its own devices. When people don’t know how to interpret all the conflicting headlines they see, they are more likely to align themselves with something they already believe. Charles Schmidt, an acclaimed science journalist reporting on the communication gap, explains that “people who aren’t inclined to pay close attention to an issue will learn about it from media outlets that reinforce their own social, political, or religious views,” which makes “it possible for individuals to draw quick conclusions about complex topics that fit their own preconceptions” (Schmidt, 2009, p. 3). If someone generally believes in climate change, they are more likely to read an article highlighting a new way to help fight it and to believe the information therein. Someone who doesn’t believe in climate change, on the other hand, will look at that same article and either move on without reading it or read it and dismiss the information.
While this might not seem like a big problem at first, it becomes one depending on where the news comes from. Schmidt explains, “both Republicans and Democrats tend to rely on news outlets that affirm their own social values… and those outlets—together with input from like-minded friends and colleagues—can be more influential than the science itself” (Nisbet qtd. in Schmidt, 2009, p. 3). It’s a natural inclination to read about topics one is interested in and to skip those one is not. Those topics come with established preconceptions, which most likely align with political affiliation and are reinforced by certain news sources. Due to the increasingly polarized state of American politics, however, bias can become more important than the information itself. Being predisposed to a certain type of language when learning about a scientific topic shapes and establishes personal views on it. Then, “[as] more people see science as another politicized field — as many people see the movie industry and mainstream news — the more they will dismiss scientific findings that clash with their worldviews” (Boyle, 2022). The dismissal of these findings is often unfounded, because the science itself is not what is being contested; yet it creates large areas of distrust because the science has become political. For example, in 2023 the Pew Research Center studied the relationship between political affiliation and trust in vaccines. While 88% of Americans said that the benefits of childhood vaccines outweigh the risks, the share of Republicans saying that childhood vaccines should be required for public school attendance fell significantly after the pandemic, from 79% to 57% (Funk, 2023). The view of the science behind vaccines has not changed, but the parts that have been politicized, mainly vaccine requirements after COVID-19, have changed significantly. This is the problem with attaching science to polarizing social issues: political trends start to outweigh the science, and the public can’t catch it on their own.
Public Confidence and the Communication Gap
Before the news can become polarized, the information must first pass between scientists and journalists. Journalists are the bridge between scientists and the public, but the relationships are tentative at best. In their influential book on the communication gaps between scientists, journalists, and the public, Jim Hartz of the Today Show and Dr. Rick Chappell of Vanderbilt University compiled responses from all three groups, asking for their opinions of the others. The most prominent finding was that scientists and journalists lack confidence in the public’s ability to understand scientific topics. Both groups said “the American public is often confused and gullible, due largely to the low level of scientific literacy in the population at large” (Hartz and Chappell, 1997, p. 41). The scientists mentioned poor science education as a reason for bad literacy more than any other factor. Selected quotes include:
- “The media can pass off poor reporting of science because of the abysmal failure of most of the American populace to understand even the rudiments of the scientific method.” (Dr. Greg Wright of Chicago)
- “The schools have done an incredibly poor job of science education. In part, this is because of a shortage of good science teachers. Most of them can make more money elsewhere.” (Rusty Harvin, a doctoral candidate at Georgia State University)
- “Unfortunately, the dismantling of our public education system has rendered the teaching of science to nothing, so the journalists mostly know nothing of the subject and therefore cannot communicate with an equally ignorant public.” (Dr. Joanna Muench of Falmouth, Mass.)
Even more explicitly, when asked who is responsible for the public’s poor knowledge, “nearly half the journalists (46 percent) said the public is at fault, and 39 percent of scientists agreed with that assessment” (Hartz and Chappell, 1997, p. 48). In short, both scientists and journalists assert that the communication gap is primarily the public’s fault, whether through bad education, a lack of interest, or a fundamental misunderstanding of topics. Sadly, this assertion is not unfounded. Hartz and Chappell (1997) report that “only one in 10 Americans thinks he or she is very well informed about scientific matters” (p. 77). It is important to ask, though: how can the public become better informed if no one else will fix their own ways?
Hartz and Chappell report that everyone shares the sentiment that scientific literacy is important and needs to be improved, but, critically, none of the respondents take any accountability for the problem. Science communication researcher Hans Peter Peters confirms this: his research reports that “the data does not indicate abrupt changes in communication practices or relevant beliefs or attitudes in the past 30 years” (Peters, 2013), despite “a large majority of both scientists and journalists [feeling that] there is no fundamental reason why the process cannot be significantly improved” (Hartz and Chappell, 1997, p. 41). This is partly why the problem has persisted for so long: both scientists and journalists believe their own approaches to communication are sufficient and that the fault lies with everyone else. One way to better understand this contradiction, and how it impacts the public, is to look at the use of jargon in scientific journals.
Use of and Response to Scientific Jargon
Jargon, or technical language well known within a field, is a necessary aspect of scientific writing. The research that goes into journals is highly specialized, and condensing background knowledge is important; a cancer research paper shouldn’t spend half its word count explaining cell division to people who examine cell division every day. In practice, however, jargon is overdone. Research scientists are “oriented and focused on the research itself… [and] tend to be wordy, unnecessarily detailed, and overly technical. They fall into jargon that is incomprehensible to anyone outside their disciplines” (Hartz and Chappell, 1997, p. 35). As a result, their papers alienate almost anyone who reads them except those in the same discipline and niche. When articles do make it past that initial audience, the information is lost because no one else can understand it. Journalists, the first people who must tackle these papers, “complained that scientists are much too wrapped up in esoteric jargon and fail to explain their work simply and cogently” (Hartz and Chappell, 1997, p. 41). The journalists, though, take a more hostile position and do not consider the practical need for some jargon. While an abstract is not enough to build a whole report on, it is the place where the information is explained simply and cogently; it is unfair to say that condensed information is not there.
Journalists also critique scientists for their lack of interest in relevance. Scientists have a vested interest in their research and, naturally thinking it compelling and relevant, will not directly make a case for it. However, “journalists also frequently mentioned in their comments that science stories need to address the issue of relevance to the reader or viewer, often because the very nature of science research is ‘complex’” (Hartz and Chappell, 1997, p. 45). The relevance of a topic is not always clear to someone outside it, and reporters must find a way to make their articles compelling because their job is not only to inform but to appeal to wider audiences; the goal of journalism is not the same as that of scientific publishing. The scientists reject this notion, claiming that “this [is] an unfair requirement that, in their view, doesn’t apply to other subject areas—crime and celebrity, for instance—covered extensively by the media” and that “the news media oversimplify complex issues” (Hartz and Chappell, 1997, pp. 41-45). This is fair pushback because, as mentioned before, the two fields have different goals; it is unfair to ask scientists to do all the work of making their research interesting and accessible when they have another job to do. And yet, the scientists certainly don’t make it easy for the journalists. These clashing perspectives leave little hope for the public, as each group is more concerned with being right than with getting information to those who need it.
Bureaucracy to a Fault: Gatekeepers and Peer Review
This superiority and elitism are not restricted to scientists and journalists, though. There are two roles within scientific publishing whose main purpose is to filter what information reaches the public’s hands. Gatekeepers are the lesser known of the two; they are responsible for making sure the work published in a journal meets a high enough standard. Gatekeepers do not apply a rigid structure to a journal; they mostly weigh the relevance and importance of the work being reported. That lack of structure means the process is heavily shaped by a gatekeeper’s personal beliefs. Most commonly, the process looks like this: “The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises” (Smith, 2006, p. 1). The process is plainly subjective, so in practice the chances of getting a paper published can be little better than a coin toss. Publication is a major factor in scientists securing funding and keeping their jobs, so they are subject to the whims of luck while gatekeepers retain all the control.
Gatekeeper biases are one of the main things keeping publications out of journals, which is ironic considering the role of a peer reviewer. Peer review is supposed to ensure that the information presented in an article is accurate and unbiased. This is what gives scientists their credibility and positions them as the experts the public values. Schmidt explains, “As soon as scientists take up an advocacy role, regardless of the position or topic, they lose credibility as unbiased sources” (Holland qtd. in Schmidt, 2009, p. 4). The peer reviewer’s role matters because keeping explicit biases out of papers gives a paper the best chance of being accepted by the public as a credible and reliable source. Peer reviewers are very good at weeding out these biases, but they don’t always catch bad or falsified data, which is arguably more important. Contrary to popular belief, peer reviewers do not replicate the experiments they are given. Instead, they use the original source notes and their prior knowledge to assess whether the data seems correct. This is what allows not only incorrect data but fraudulent data to get published, as was the case with Victor Ninov. Richard Smith, an editor of the British Medical Journal, tested this point by inserting several major errors into papers before sending them out for review: “Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter” (Smith, 2006, p. 2). Peer review is the gold standard for quality work, but there is little reason for it to be. The only people who like the peer-review and gatekeeper systems are the ones in those roles, because those roles let them maintain their authority over deciding what is good and what is bad. Both systems are widely regarded as faulty, yet it seems impossible to leave them behind. As Smith (2006) puts it, “[peer review] is compared with democracy: a system full of problems but the least worst we have” (p. 1).
A Student Perspective
At the same time, the critique of peer review is not widely known outside the scientific realm. Despite not being the target audience of peer-reviewed papers, teachers and students are taught that such papers are the best of the best and are critical sources to include in any essay or presentation. This misunderstanding, paired with dense jargon, paywalls, and a lack of advanced scientific literacy, makes it nearly impossible for students to get the most out of their sources.
The precedent that the stamp of peer review marks a good source does not set students up for success. These papers are deemed excellent by a scientist’s peers, which means they are likely inaccessible to anyone else. Bricks of text stuffed with jargon, too much to comprehend without opening hundreds of tabs of definitions, and underexplained abstracts make papers nearly impossible to read without assistance. As a result, students are taught workarounds for reading scientific articles. Often, teachers suggest reading the abstract, the conclusion, the subheadings, and the small sections that seem relevant to the point the student wants to make. This results in unintentional cherry-picking of data or quotes while the article may be arguing the complete opposite point. Without reading the whole paper, there is no way to know whether the source is relevant to the essay, but the full text is far too difficult to read, so skimming must suffice. If the standard of peer review as the indicator of quality work were not clung to so tightly, the research process could be much easier. Plenty of scientists are dedicated to making their work accessible, but that accessibility likely means peer reviewers will not deem the work sophisticated enough for a prestigious journal. Information published without the stamp of peer review can be equally interesting, is often free, and is written at a level more people can comprehend.
Conclusion
Scientific communication is an expansive system, encompassing many perspectives and many opportunities for education and outreach. In theory, everyone involved, from journalists to peer reviewers to the massive public, would provide extra eyes to promote impressive feats or to catch malpractice before it goes too far. Unfortunately, the way the system is implemented allows far too much to fall through the cracks, hurting everyone on the way down. Victor Ninov took advantage of those cracks and almost rode the faulty system to a Nobel Prize. In his wake, his lab’s credibility was destroyed, enormous amounts of money and time went into re-examining every piece of work he ever touched, and the scientific community was shaken to its core. But this problem is not restricted to Ninov. In fact, it is not restricted to anyone. Solving scientific communication’s problems seems like it should be a priority, but there are too many layers of complexity and ego for them to be fixed overnight. Everyone suffers from the loss of knowledge that stems from poor scientific communication, even day to day. But by caring about the integrity of learning, valuing personal worth, demanding clarity, and demonstrating interest in scientific problems, we can revisit scientific communication and test a new hypothesis.
References
BobbyBroccoli. (2022). The man who tried to fake an element [Video]. YouTube. https://youtu.be/Qe5WT22-AO8?si=niye-R9SppOwQfPh
Boyle, P. (2022, May 4). Why do so many Americans distrust science? AAMC. https://www.aamc.org/news/why-do-so-many-americans-distrust-science
Castillo, M. (2013). The scientific method: A need for something better? American Journal of Neuroradiology, 34(9), 1669–1671. https://doi.org/10.3174/ajnr.a3401
Funk, C. (2023, May 16). Americans’ largely positive views of childhood vaccines hold steady. Pew Research Center. https://www.pewresearch.org/science/2023/05/16/americans-largely-positive-views-of-childhood-vaccines-hold-steady/
Hartz, J., & Chappell, R. (1997). Worlds apart: How the distance between science and journalism threatens America’s future. First Amendment Center.
Peters, H. P. (2013). Gap between science and media revisited: Scientists as public communicators. Proceedings of the National Academy of Sciences, 110(Suppl. 3), 14102–14109. https://doi.org/10.1073/pnas.1212745110
Schmidt, C. W. (2009). Communication gap: The disconnect between what scientists say and what the public hears. Environmental Health Perspectives, 117(12), A548–A551.
Seife, C. (2002, July). Elements 116 and 118 were a sham. Science. https://www.sciencemag.org/news/2002/07/elements-116-and-118-were-sham
Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178–182. https://doi.org/10.1177/014107680609900414
Snow, C., & Dibner, K. (2016). Summary. In Science literacy: Concepts, contexts, and consequences (pp. 1–9). The National Academies Press.