Many scientific papers receive little attention initially but become highly cited years later. What groundbreaking discoveries might have already been made, and how can we uncover them faster?
The scientific literature is vast. No individual human can fully know all the published research findings, even within a single field of science. Regardless of how much time a scientist spends reading the literature, there’ll always be what the information scientist Don Swanson called ‘undiscovered public knowledge’: knowledge that exists and is published somewhere, but still remains largely unknown.
Some scientific papers receive very little attention after their publication – some, indeed, receive no attention whatsoever. Others, though, can languish with few citations for years or decades, but are eventually rediscovered and become highly cited. These are the so-called ‘sleeping beauties’ of science.
The reasons for their hibernation vary. Sometimes it is because contemporaneous scientists lack the tools or practical technology to test the idea. Other times, the scientific community does not understand or appreciate what has been discovered, perhaps because of a lack of theory. Yet other times it’s a more sublunary reason: the paper is simply published somewhere obscure and it never makes its way to the right readers.
What can sleeping beauties tell us about how science works? How do we rediscover information the scientific body of knowledge already contains but that is not widely known? Is it possible that, if we could understand sleeping beauties in a more systematic way, we might be able to accelerate scientific progress?
Sleeping beauties are more common than you might expect.
The term ‘sleeping beauties’ was coined in 2004 by Anthony van Raan, a researcher in quantitative studies of science. In his study, he identified sleeping beauties published between 1980 and 2000 based on three criteria: first, the length of their ‘sleep’, during which they received few if any citations; second, the depth of that sleep – the average number of citations per year during the sleeping period; and third, the intensity of their awakening – the number of citations that came in the four years after the sleeping period ended. Equipped with (somewhat arbitrarily chosen) thresholds for these criteria, van Raan identified sleeping beauties at a rate of about 0.01 percent of all papers published in a given year.
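To make those criteria concrete, here is a minimal sketch of how such a filter might look in code. It follows the structure of van Raan’s three criteria, but the threshold values and the fixed sleep window are illustrative assumptions, not those from his study:

```python
# A sketch of van Raan-style sleeping-beauty criteria, applied to a
# paper's yearly citation counts. All thresholds are illustrative
# assumptions, not the values from the 2004 study.

def is_sleeping_beauty(citations_per_year,
                       sleep_years=10,               # length of sleep
                       max_avg_during_sleep=1.0,     # depth of sleep
                       min_awakening_citations=20):  # awakening intensity
    """citations_per_year: yearly counts, index 0 = publication year."""
    if len(citations_per_year) < sleep_years + 4:
        return False
    sleep = citations_per_year[:sleep_years]
    awake = citations_per_year[sleep_years:sleep_years + 4]
    deep_sleep = sum(sleep) / sleep_years <= max_avg_during_sleep
    intense_awakening = sum(awake) >= min_awakening_citations
    return deep_sleep and intense_awakening

# A paper ignored for a decade, then suddenly cited:
print(is_sleeping_beauty([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 12, 18, 25, 30]))  # True
```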
Later studies hinted that sleeping beauties are even more common than that. A systematic study in 2015, using data on 384,649 papers published in American Physical Society journals, along with 22,379,244 papers indexed in the Web of Science database, found a wide, continuous range of delayed recognition in all scientific fields. This raises the estimated share of sleeping beauties at least 100-fold compared with van Raan’s figure.
Many of those papers became highly influential many decades after their publication – far longer than the typical time windows for measuring citation impact. For example, Herbert Freundlich’s paper ‘Concerning Adsorption in Solutions’ (originally published in German) appeared in 1907, but began being regularly cited in the early 2000s thanks to its relevance to new water purification technologies. William Hummers and Richard Offeman’s ‘Preparation of Graphitic Oxide’, published in 1958, also didn’t ‘awaken’ until the 2000s: in this case because it was highly relevant to the creation of graphene, the soon-to-be Nobel Prize–winning material.
Both of these examples are from ‘hard’ sciences – and interestingly, in physics, chemistry, and mathematics, sleeping beauties seem to occur at higher rates than in other scientific fields.
Indeed, one of the most famous physics papers – ‘Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ (1935), by Albert Einstein, Boris Podolsky, and Nathan Rosen (together, ‘EPR’) – is a classic example of a sleeping beauty. It sits at number 14 on one list that ranks sleeping beauties by how long they slept and how many citations they suddenly accrued.
The EPR paper questioned whether quantum mechanics could truly describe physical reality. The stumbling block was the phenomenon of ‘quantum entanglement’, where two quantum particles that have previously interacted remain connected in such a way that a measurement of a property of one of them influences that property in the other, regardless of how far apart they are.
To Einstein, this meant that the particles must be communicating instantaneously, faster than the speed of light. This violates the principle of locality, which is fundamental to his theory of relativity. Einstein called this ‘spooky action at a distance’, and his solution to the contradiction was to posit ‘hidden variables’ that determine the state of a quantum system – variables that had no place in the Copenhagen interpretation of quantum mechanics defended by Niels Bohr.
The paper caused intense debates between Einstein and Bohr from its publication until the end of their lives. But it wasn’t until the late 1980s that the EPR paper saw a spike in citations.
The EPR paper wasn’t hidden in a third-tier journal, unread by the scientific community. Indeed, it generated intense debate, even a New York Times headline. But in terms of its citations, it was a sleeper: it received far fewer citations than one would expect, because its claims needed experimental testing, and that testing wasn’t feasible until long afterward.
In 1964, the physicist John Bell showed that if Einstein’s ‘hidden variables’ existed, the statistics of certain paired measurements would have to obey algebraic bounds, now called Bell’s inequalities. If experiments showed those bounds being respected, it would contradict quantum mechanics and vindicate the ‘hidden variables’ view.
A 1969 paper by John Clauser and colleagues got one step closer, by framing Bell’s inequalities in a way better suited to real experiments – which followed only in modest numbers in the 1970s, hampered by imperfect equipment: the best technology of the day, such as light polarizers too inefficient to be reliable, was unfit to test the theory.
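To give a flavor of what those experiments measure: in the Clauser–Horne–Shimony–Holt (CHSH) framing, a quantity S built from correlations at four pairs of detector settings can never exceed 2 in magnitude if local hidden variables are at work, whereas quantum mechanics predicts values up to 2√2. A minimal numerical sketch, using the ideal quantum prediction for entangled photon pairs rather than data from any real experiment:

```python
import math

# Ideal quantum correlation between polarization measurements on an
# entangled photon pair, with analyzers at angles a and b (radians).
def E(a, b):
    return math.cos(2 * (a - b))

# Standard CHSH settings that maximize the predicted violation:
a1, a2 = 0.0, math.pi / 4              # Alice: 0 and 45 degrees
b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob: 22.5 and 67.5 degrees

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)             # ~2.828, i.e. 2 * sqrt(2)
print(abs(S) <= 2)   # False: the local hidden-variable bound is violated
```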
A new generation of far more conclusive experiments in the early 1980s was made possible by progress in laser physics. These experiments gave an unambiguous violation of Bell’s inequalities – and thus strong agreement with quantum mechanics. Einstein’s ‘hidden variables’ proved to be superfluous. Since then, further technological advances have made experimental measurements increasingly precise – indeed, as close to the ideal imagined by the EPR paper as possible.
This has all led to an explosion of citations of the EPR paper. The sleeping beauty – having had its slumber only mildly disturbed across several decades – is now wide awake.
In some cases, a sleeping beauty comes without the kind of great mystery attached to the EPR paper. Sometimes scientists understand something well enough – but just don’t know what to do with it.
The first report of the green fluorescent protein (GFP) – a crucial ingredient in many modern biological experiments because of its ability to glow brightly under ultraviolet light, and thus act as a clear indicator of cellular processes like gene expression and protein dynamics – was published in 1962 in the Journal of Cellular and Comparative Physiology. GFP had been discovered in the jellyfish Aequorea victoria in research led by the marine biologist Osamu Shimomura.
Over the summers of the following 19 years, some 85,000 A. victoria jellyfish were caught off Friday Harbor in Washington State in attempts to isolate enough GFP for a more thorough characterization. This resulted in a series of papers between 1974 and 1979. But as Shimomura admitted in an interview many years later, ‘I didn’t know any use of . . . that fluorescent protein, at that time.’
In 1992, things changed. The protein was cloned, and the relevant genetic information was passed on to the biologist Martin Chalfie. Chalfie was the first to come up with the idea of expressing GFP transgenically, in E. coli bacteria and C. elegans worms. He demonstrated that GFP could be used as a fluorescent marker in living organisms, opening up new worlds of experimentation. GFP is now a routinely used tool across swathes of cell biology.
Chalfie and colleagues published their work in Science in 1994, citing Shimomura’s 1962 and 1979 papers – thus abruptly waking them up to a flood of fresh citations. A basic discovery in an obscure organism – Shimomura’s jellyfish – had been made into a nearly universal, ultra-useful molecular tool. It took decades of work, and a flash of insight from a receptive and ingenious mind, but Shimomura’s studies were finally being used by the rest of the profession.
Although it’s tempting to look at a paper’s ‘beauty coefficient’ – as the systematic study from 2015 called its numerical index of how long a paper slept and how many citations it received when awakened – and assume that the knowledge it described must have been suddenly rediscovered, this isn’t always the case. Three examples come from papers in statistics that, on a first, naive look, all appear to be textbook sleeping beauties.
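The coefficient itself is simple: it compares a paper’s actual citation history against a straight line drawn from its citations in the publication year to its citations in its peak year, so that a long, deep sleep followed by a sharp awakening yields a large value. A rough sketch of the computation, following the 2015 paper’s formula (the example citation series is invented):

```python
def beauty_coefficient(c):
    """c: yearly citation counts, with c[0] = the publication year.

    Each year's citations are compared against a reference line from
    (0, c[0]) to (t_m, c[t_m]), where t_m is the peak-citation year;
    years far below that line add to the coefficient."""
    t_m = max(range(len(c)), key=lambda t: c[t])
    if t_m == 0:
        return 0.0
    slope = (c[t_m] - c[0]) / t_m
    return sum((slope * t + c[0] - c[t]) / max(1, c[t])
               for t in range(t_m + 1))

# A long sleep followed by a late spike produces a large value:
print(beauty_coefficient([0, 0, 1, 0, 1, 0, 0, 2, 30, 60]))
```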
The first is Karl Pearson’s 1901 paper ‘On Lines and Planes of Closest Fit to Systems of Points in Space’. It looks like a classic case of a sleeping beauty: it was published in a journal with the rather unwieldy name of The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, and seems to have slept soundly for a whole century, only being fully awakened in 2002 with a huge surge of citations.
It’s certainly true that the twenty-first century brought with it many more ways to use Pearson’s 1901 insights. What he had described was what eventually became the statistical workhorse known as principal components analysis (PCA), which proved particularly useful after the advent of digital ‘big data’ for discovering patterns and summarizing large, unwieldy datasets in a smaller number of variables. But even before those datasets existed, the technique of PCA was in regular use across the entire twentieth century, from psychology to palaeontology.
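To make the idea concrete, here is a tiny illustration of PCA at work, using the scikit-learn library (a modern convenience – Pearson, of course, worked by hand), with an invented toy dataset:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy dataset: six observations of three variables, two of which are
# strongly correlated, so most of the variance lies along one axis.
rng = np.random.default_rng(0)
x = rng.normal(size=6)
data = np.column_stack([x,
                        2 * x + rng.normal(scale=0.1, size=6),
                        rng.normal(size=6)])

pca = PCA(n_components=2)
reduced = pca.fit_transform(data)      # 3 variables summarized in 2 components
print(pca.explained_variance_ratio_)   # most variance captured by component 1
```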
It’s hard to say why the 1901 paper suddenly started being cited around 2002 – the explanation could be pure luck and social dynamics, with one study happening to cite it and others following suit – but it wasn’t because PCA, which by that point was taught in every basic statistics course, had been ‘rediscovered’.
A second example is Fisher’s exact test, commonly used to determine the statistical significance of associations between categorical variables (for example, to test whether the proportion of recovered patients is different between two treatments). It was published by the eponymous Ronald Fisher in 1922, and appeared to ‘awaken’ in 2006. But as with PCA, Fisher’s exact test was routinely used across the twentieth century, becoming a regular part of analysts’ tool kits.
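For a sense of what the test does in practice, here is the kind of two-by-two comparison it handles, via SciPy’s implementation (the patient counts are invented for illustration):

```python
from scipy.stats import fisher_exact

# Hypothetical trial: recovered vs. not recovered under two treatments.
table = [[9, 1],   # treatment A: 9 recovered, 1 not
         [4, 6]]   # treatment B: 4 recovered, 6 not

odds_ratio, p_value = fisher_exact(table)
print(p_value)  # a small p-value suggests the recovery rates differ
```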
Finally, there’s the Monte Carlo method of statistics, which uses random sampling to produce numerical answers to problems that, while solvable in principle, are too complex to attack head-on. It was developed by a group of researchers including Stanislaw Ulam and John von Neumann in the 1940s at the Los Alamos Laboratory, intended to solve the problem of neutron diffusion in the core of a nuclear weapon.
In 1949, Nicholas Metropolis and Ulam published a paper titled ‘The Monte Carlo Method’ laying out its foundations – but looking at the paper’s citations, it seemed to remain dormant until 2004. Again, though, the method it described was already having a profound impact across many different fields, and in practice the paper wasn’t a true sleeping beauty.
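The method’s core idea fits in a few lines of code. Here is the standard textbook illustration – estimating π from random points – rather than the neutron-diffusion problem the Los Alamos group actually faced:

```python
import random

# Monte Carlo estimate of pi: sample random points in the unit square
# and count the fraction that fall inside the quarter circle.
n = 1_000_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1)
print(4 * inside / n)  # approaches 3.14159... as n grows
```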
Citations are currency in science, but it’s easy to rely on them too heavily. The statistical cases described above illustrate that merely looking at citations, and not understanding the full context of a study, can be misleading: we might end up asking ourselves why a paper is a sleeping beauty when really it isn’t a true example of the phenomenon.
With that caveat on the record, we can look at a final example of a true sleeping beauty – one that perhaps has the most to teach us about how to awaken dormant knowledge in science.
In 1911, the pathologist Francis Peyton Rous published a paper in which he reported that when he injected a healthy chicken with a filtered tumor extract from a cancerous chicken, the healthy chicken developed a sarcoma (a type of cancer affecting connective tissue). The extract had been carefully filtered to remove any host cells and bacteria – the obvious candidate causes of cancer – so some other factor had to explain the contagious tumor.
It turned out that the cause of the tumor in the injected chicken was a virus – but Rous wasn’t able to isolate it at the time.
The importance of his study, and the paper reporting it, wasn’t recognized until after 1951, when a murine leukemia virus was isolated. This opened the door to the era of tumor virology – and to many citations for Rous’s initial paper. The virus Rous had unknowingly discovered in his 1911 paper became known as the Rous sarcoma virus (RSV), and Rous was awarded the Nobel Prize in Medicine in 1966, 55 years after the paper was published.
Many other examples of tumor-inducing viruses followed: in rabbits, cats, nonhuman primates, and eventually humans (one example is the Epstein-Barr virus, which causes glandular fever, also known as mono, and which can cause Hodgkin’s lymphoma).
This indirectly led to the discovery of oncogenes: genes that, when mutated, have the potential to cause cancer, for example by promoting uncontrolled cell growth. That’s because scientists, fascinated by the discovery of cancer-promoting viruses, began to look at exactly how they had their effect. For RSV, it was discovered that one of the genes that helped it cause cancer had a counterpart in the chicken (and indeed human) genome known as SRC. Discoveries of many other virus-host gene pairings followed, allowing new insights into the etiology of cancer.
What can we learn from the case of the RSV, where important knowledge was hidden away and unappreciated for decades before it was rediscovered, seeding an enormously fruitful line of research? What can we learn from sleeping beauties more generally?
Some of the reasons for a paper’s slumber are technological. Just as with the sleeping beauties from physics, it was only with the advent of new techniques – in Rous’s case, in virology – that his finding could be fully explored and verified. As the biologist Andreas Wagner put it, ‘no innovation, no matter how life-changing and transformative, prospers unless it finds a receptive environment’.
Some sleeping beauties might have slept due to poor access to findings. In the early twentieth century it was hardly straightforward to lay one’s hands on a scientific journal to access the knowledge within; these days it’s easier, though still not as easy as it should be. The Open Access movement has made important strides in this regard – and sleeping beauties are a reminder that freely available knowledge could accelerate scientific progress for the mundane reason that more scientists can read it and have it interact with their pre-existing ideas.
Another lesson is related to collaboration. It could be that the techniques and knowledge required to fully exploit a discovery in one field lie, partly or wholly, in an entirely different one. A study from 2022 showed empirically how the ‘distance’ between biomedical findings – whether they came from similar subfields or from ones that generally never cite each other – affects whether they tend to be combined to form new knowledge.
‘Biomedical scientists’, as the paper’s author, data scientist Raul Rodriguez-Esteban, put it, ‘appear to have a wide set of facts available, from which they only end up publishing discoveries about a small subset’. Perhaps understandably, they tend to ‘reach more often for facts that are closer’. Encouraging interdisciplinary collaboration, and encouraging scientists to keep an open mind about who they might work with, could help extend that reach.
That, of course, is easier said than done. Perhaps the most modern tools we have available – namely, powerful AI systems – could help us. It is possible to train an AI to escape the disciplinary lines of universities, instead generating ‘alien’, yet scientifically plausible, hypotheses from across the entire scientific literature.
These might be based, for example, on the identification of unstudied pairs of scientific concepts – pairs unlikely to be imagined by human scientists in the near future. Research in natural language processing has already shown that a purely textual analysis of published studies can glean gene-disease associations or drug targets years before human researchers would otherwise discover them.
The AI technique of ‘contextualized literature-based discovery’ aims to mine the scientific literature to discover entirely new hypotheses that human scientists can then test. At present, it produces a lot of non-useful combinations and ideas – but as the AI models improve, we could use them to sift through that vast scientific literature in ways that no human could ever do, finding sleeping beauties – or at least partial sleeping beauties – that ‘wake up’ when combined with some other important piece of knowledge.
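The simplest ancestor of this approach is Swanson’s own ‘ABC’ model: if concept A is linked to B in one literature, and B to C in another, but A and C are never discussed together, the pair (A, C) is a candidate hypothesis. A toy sketch of that co-occurrence search, seeded with Swanson’s famous fish-oil–Raynaud’s example (real systems use far richer representations than bare co-mentions):

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each paper is reduced to the set of concepts it mentions.
papers = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "Raynaud's syndrome"},
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "Raynaud's syndrome"},
]

# Record which concepts have ever appeared in the same paper.
co_occurs = defaultdict(set)
for concepts in papers:
    for a, b in combinations(sorted(concepts), 2):
        co_occurs[a].add(b)
        co_occurs[b].add(a)

# Candidate hypotheses: pairs never co-mentioned but sharing 'bridge'
# concepts that link them indirectly.
for a, c in combinations(sorted(co_occurs), 2):
    if c not in co_occurs[a]:
        bridges = co_occurs[a] & co_occurs[c]
        if bridges:
            print(f"{a} <-> {c}, via {sorted(bridges)}")
```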
Most scientific discoveries emerge in the vicinity of earlier findings – the search for new knowledge, as one recent study put it, ‘is dominated by local exploitation of the familiar over novel exploration of the unknown’. That’s only natural: scientists, like humans in general, stick to what they know, mainly reading the literature from their own fields.
Now that we know the power of sleeping beauties, that needn’t be the case – and it needn’t be the case for AI either. Complementing human reasoning with AI has the potential to accelerate scientific discovery, expanding the scope of our collective imagination.
These efforts in literature mining will likely awaken many still-sleeping beauties in the scientific literature along the way.