Shining a light on misleading data

August 25, 2025

Teng-Jui (Owen) Lin and Kylee Hillman. Photo by Alfonso D. Restrepo.

Two PhD students win a national prize for promoting scientific rigor

Teng-Jui (Owen) Lin and Kylee Hillman, two graduate students in Professor Markita Landry's lab at the College of Chemistry, have been named champions of scientific rigor and awarded the NINDS Early-Career Rigor Champions Prize. The prize, which includes a cash award of $7,500, recognizes early-career scientists who actively promote better research practices and raise awareness about the importance of scientific rigor. The students' winning project, "Quantifying data misrepresentations in biological research," tackles a common but often overlooked problem in scientific publications.

The students' work focused on how scientific data is visually presented. They discovered that some common methods of data visualization can be misleading – particularly bar graphs and colormaps, which are frequently used in biological research. In fact, their research revealed that 30% of articles in high-impact biological journals misused bar graphs, while 80% misused colormaps.

The team learned that these issues aren't new; they have persisted for years despite a push for better data visualization practices. For example, bar graphs are often cropped or "truncated" on the y-axis, which can make a small difference between two groups look far larger than it actually is. Similarly, they noted that the popular "rainbow" colormap can be misleading because it is not perceptually uniform, meaning equal changes in the underlying data don't appear as equal changes in color, and it is also difficult for readers with color blindness to interpret.
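To make these distortions concrete, here is a minimal, illustrative matplotlib sketch; the numbers are invented for demonstration, and this is not the students' analysis code. It plots the same two values twice, once with the y-axis starting at zero and once truncated, and then renders the same smooth gradient with the rainbow "jet" colormap and the perceptually uniform "viridis":

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: two groups that differ by only ~2%.
labels = ["Control", "Treatment"]
values = [100.0, 102.0]

fig, axes = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: the y-axis starts at zero, so the ~2% gap looks small.
axes[0].bar(labels, values)
axes[0].set_ylim(0, 110)
axes[0].set_title("Baseline at zero")

# Truncated version: the y-axis starts at 99, so the same gap
# fills most of the plot and looks dramatic.
axes[1].bar(labels, values)
axes[1].set_ylim(99, 102.5)
axes[1].set_title("Truncated y-axis (same data)")

fig.tight_layout()

# Colormap comparison: the same smooth gradient rendered with the
# rainbow "jet" colormap and with the perceptually uniform "viridis".
gradient = np.linspace(0, 1, 256).reshape(1, -1)
fig2, axes2 = plt.subplots(2, 1, figsize=(8, 2))
for ax, cmap in zip(axes2, ["jet", "viridis"]):
    ax.imshow(gradient, aspect="auto", cmap=cmap)
    ax.set_yticks([])
    ax.set_ylabel(cmap, rotation=0, labelpad=25)
fig2.tight_layout()

plt.show()
```

Rendered side by side, the truncated panel exaggerates a 2% difference, and in the "jet" strip the bright cyan and yellow bands read as boundaries in the data even though the underlying gradient changes perfectly smoothly.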

The project idea first came to the students during their lab's journal club, where they noticed a recurring pattern of misleading data visualizations in articles from highly respected journals. They were surprised to discover that while many people had anecdotally commented on these issues, no one had actually quantified the problem. In other words, nobody had done the rigorous work of measuring just how widespread these issues were.

Unsatisfied with this lack of data, they decided to conduct their own study. Their initial pilot project revealed that over 40% of articles in one top journal had at least one visualization mistake. The results were striking enough that they expanded the effort into a more systematic study, aiming to alert the scientific community to the risks of these misrepresentations.

Their prize-winning project not only quantified the problem but also provided a clear set of guidelines to help researchers avoid these common mistakes. Their work highlights a critical gap in scientific training and aims to ensure that published data is presented as accurately and transparently as possible.