I’ve been struggling to write up an article for submission looking at Heart Rate Variability (HRV) data over an even longer time frame than my earlier publication. A piece I came across, “Should Scientists Tell Stories?”, gave me new clarity on how to structure the paper so that it’s clearer what my research is about and what my “vision” is. What I’m trying to show is how HRV analysis can turn the reduction in migraines I see all the time in clinic into a trackable, measurable physiological profile that can be demonstrated over time.
The article lays out plainly the positives and negatives of the storytelling approach:
> But there will be cases in which failed experiments bring a necessary nuance to the data, suggesting weaknesses in the argument or settings where the conclusions are questionable. Omission of such information may be unjustifiable. What is more, authors can easily segue into frank cherry-picking of data to support a desired conclusion, a practice that goes against the deepest goals of scientific research.
> The rise of supplementary information has served an important function in providing a place for failed experiments and negative or unexplained results. Efforts by publishers to integrate supplementary information into the online version of the manuscript can crucially expose these data to readers without compromising a manuscript’s narrative. But hard limits on supplementary information or efforts to eliminate it altogether could complicate a paper’s narrative or, alternatively, whittle it down to a tightly told story with little room for more than one interpretation.