Why science needs outsiders

18th September 2025

Science has forgotten that the greatest breakthroughs often come from outsiders who bring a fresh perspective.

Many of the most important scientific advances came from unexpected people. A draper was the first to observe bacteria, a clockmaker solved the problem of longitude, a musician discovered Uranus, and a Hollywood actress helped invent secure wireless communication. Let’s call these people outsiders.

Most scientists are ‘insiders’ – experts and specialists who spend their careers inside one academic discipline, mastering its ideas and methods over time. Outsiders, on the other hand, may come from another discipline, work outside established institutions, or be early enough in their careers that they are not yet part of the senior establishment of a discipline.


Outsiders often succeed because scientific progress is partly about generating models of how the world works, and partly about testing, applying, and refining those models. The former is how we got quantum theory; the latter is how we got lasers and the MRI machine.

Insiders are often better at fleshing out theories in detail. But they get attached to their theories and can be bad at seeing when those theories need to change. Outsiders have accumulated less expertise, but being less attached to particular theories, they are more willing to replace them through ‘paradigm shifts’: creating new frameworks that predict new facts and define new research questions. A productive system needs both kinds of work.

Academia has a comparative advantage in ‘outsider’ work – unlike industry research, which tends to be applied, narrow, and focused on practical goals, academics naturally have the freedom and job security to take the outside view. But academia can be hostile to outsiders, and it is becoming more hostile as it comes to represent a larger and larger fraction of science.

This narrowing of opportunities for outsiders has weakened science’s ability to generate paradigm shifts. Many of the biggest scientific leaps began when an outsider spotted a puzzle, imported a method, or sketched a new theory. To encourage more of these leaps today, we need to create space for outsiders by giving people more freedom to switch fields, work independently earlier in their careers, and explore unusual ideas.

How academic science keeps outsiders out

Academia filters most funding, publishing, and hiring decisions through senior insiders, which favors ideas within existing paradigms. This matters especially today, because industry has largely dismantled the long-term research labs that once supported exploratory work.

Natural experiments suggest that social and institutional dynamics can suppress paradigm shifts. When eminent scientists die, publication rates among their direct collaborators decline, while publications by scientists outside their circle rise and become more likely to be highly cited. The presence of powerful insiders often creates barriers that discourage new entrants and limit intellectual diversity.

Alzheimer’s research was dominated for decades by the theory that a protein called amyloid beta is the main cause of the disease. Though this hypothesis still has value, its near-monopoly on funding and attention sidelined other promising ideas, such as inflammation, blood vessel dysfunction, or the role of another protein called tau. This dominance owed more to institutional gatekeeping than to the strength of the evidence. In interviews, scientists described how challenging the amyloid dogma caused their papers to be rejected, grant applications to be blocked, and speaking slots to be cancelled.

Such dominance can persist because access to both funding and publication usually means passing through review by journal editors and senior experts within a discipline. This often filters out work that challenges familiar ideas or crosses disciplinary boundaries. As Richard Posner, the American legal scholar and polymath, noted: ‘The specialist will always be able to nail the generalist by pointing out that [they] don’t use the vocabulary quite right’. Reviewers give higher ratings to studies that support dominant theories, even when methods are identical.

These review processes would be justified if they were an effective way to ensure correctness. But there is reason to doubt this. Peer reviewers fail to catch 70–75 percent of major methodological errors, and most fraudulent papers are only uncovered after publication, often by whistleblowers rather than official processes. 

In large-scale replication projects, fewer than 40 percent of psychology findings replicated; in cancer biology, only 46 percent replicated, with median effect sizes 85 percent lower than originally reported. But it is difficult to challenge such findings, as journals rarely welcome negative replications and funders seldom pay for them. Unsurprisingly, replication studies account for approximately one percent of papers. This makes it difficult for outsiders who spot anomalies to substantiate their suspicions or get them noticed.

Outsiders with less conventional career paths also have a harder time getting hired. Many fellowships from the US National Institutes of Health are limited by age or time since degree and explicitly do not consider time away from research as an exception to these limits. In surveys, most scientists also believe that candidates with non-academic work experience are viewed negatively by hiring committees.

More and more, researchers are expected to stay within one field, even though the lines between disciplines are often arbitrary. Interdisciplinary scientists (defined as those with less than 50 percent of their papers in a single field) represented 55 percent of researchers in the 1960s but fell to under 40 percent in the 2010s. Interdisciplinary research proposals are 20–30 percent less likely to be funded than comparable single-discipline proposals, and interdisciplinary papers take around 30 percent longer to clear pre-publication peer review.

Researchers are also expected to spend significant time learning the accepted dogma of a field before making their own contribution. In the United States, biomedical researchers typically gain independence when they secure a major grant known as an R01 from the National Institutes of Health. The proportion of R01-funded principal investigators aged 35 or younger fell from 18 percent in 1983 to 3 percent in 2010, and first-time grant recipients have grown steadily older.

Simultaneously, we are experiencing a general decline in breakthroughs. According to a survey of top scientists, many recent Nobel-winning discoveries are less transformative than those of previous generations. In computing, economists estimate that sustaining the rate of progress described by Moore’s Law now requires roughly eighteen times as many semiconductor researchers as it did in the early 1970s.

The narrowing of opportunities in academia matters more today because industry no longer offers a meaningful alternative for pure research. In the past, industrial labs like Bell Labs, DuPont and Xerox PARC provided space for high-risk, long-term research, often supporting people who didn’t fit the academic mold. But large companies have increasingly retreated from such ambitious research projects.

One indicator is the R&D 100 awards, which recognize the hundred most significant new technologies each year. The percentage of these awards going to Fortune 500 companies fell from 47 percent in 1975 to 6 percent in 2006. While startups now play a bigger role in early-stage innovation, they often lack the resources to support interdisciplinary teams or sustained exploratory work before success is proven. The frontier AI companies, which all run large internal research labs, are the major exception to this trend, and a demonstration of the value of such labs.

Thinkers need institutional support

Some major breakthroughs have come from scientists working in near-total independence. In the early 1960s, the biochemist Peter Mitchell left his post at Edinburgh University – due to both illness and dissatisfaction with research institutions – and set up a tiny independent lab in a manor house. There, he had time to fully develop his radical and initially unpopular speculation about how cells make energy: that membranes act like batteries, storing energy as a proton gradient. The idea eventually won him a Nobel Prize in 1978.

When Bell Labs shut down much of its basic research in 1994, Bell researcher Eric Betzig quit, set up a machine shop in Michigan, and tinkered with optics in his spare time. In 2005, he and collaborator Harold Hess built the first photoactivated localization microscope in Hess’s living room, using roughly $25,000 of parts from eBay. Today, versions of these microscopes are standard in many molecular biology labs.

Stepping outside the system gave Mitchell and Betzig something insiders rarely get: uninterrupted time and social freedom. Without having to chase funding or conform to disciplinary fashions, they could keep working on ideas that seemed odd or premature for enough time to generate proof points. Though their efforts succeeded, they required uncommon personal risk and considerable entrepreneurial effort – costs that few researchers, especially those in secure posts, are willing to shoulder. It’s plausible that many similar theoretical advances were delayed until someone was willing to make such efforts.

Stomach ulcers were blamed on stress for much of the 20th century, because the prevailing consensus held that bacteria couldn’t survive in the stomach. When Robin Warren and Barry Marshall showed that Helicobacter pylori in stomach biopsies correlated with ulcers, their findings were largely dismissed. Winning acceptance required incredible persistence, and even after Marshall resorted to drinking the bacteria himself to demonstrate the link, it took more than a decade for antibiotic treatment to be officially recommended.

Independent work is more common today in fields like theoretical computer science and mathematics, where ideas can still thrive without labs or large teams. But it is not a meaningful alternative in areas like molecular biology or particle physics that demand expensive equipment to make serious contributions.

Einstein the outsider

Albert Einstein is usually pictured as his older self, a wild-haired elder statesman of academia. But this wasn’t how he began. In 1905, the 26-year-old Einstein was a patent examiner who had graduated just a few years earlier and failed to get an academic position. Despite lacking experience or resources, he published four groundbreaking papers that year: introducing special relativity, explaining the photoelectric effect, demonstrating the existence of atoms through the random motion of particles in fluids, and deriving mass-energy equivalence, the famous equation E=mc².

Albert Einstein, aged 26, in 1905. Image: Lucien Chavan via Wikimedia Commons.

It is tempting to assume that Einstein was a singular genius who succeeded despite his inexperience. But there are many cases where outsiders with little experience in a field make the most important breakthroughs. 

Such outsiders include young scientists like Werner Heisenberg, who developed the uncertainty principle at just 25. They also include those who shifted fields mid-career, like Venki Ramakrishnan, who moved from physics to biology in his thirties and went on to win a Nobel Prize for work on the structure of the ribosome. Others made breakthroughs despite working outside mainstream institutions: Tu Youyou, who had no postgraduate training and worked in a little-known, state-run laboratory tied to a secret military research program, discovered artemisinin, a malaria treatment that has saved millions of lives.

Outsiders win by reframing, not refining

Compared with insiders, outsiders are less bound by the norms and methods of a field. This gives them a number of advantages: they are more likely to notice anomalies that others overlook, they can bring ideas or tools from other disciplines, and they can work for longer periods free from dominant intellectual trends.

One example is Alfred Wegener, who made his living studying polar air circulation but had no formal background in geology. In 1912, while comparing maps and fossil distributions across continents, he noticed uncanny jigsaw-style matching patterns between opposite shores, which geologists had dismissed as coincidence. His outsider status allowed him to challenge the dominant belief that continents were fixed, proposing the then-heretical idea of continental drift, which eventually laid the groundwork for modern plate tectonics.

Outsiders are often the ones who take anomalies – strange or inconsistent facts that insiders learn to overlook – seriously. Cognitive science has a name for the insiders’ blind spot: the Einstellung effect, where deep expertise leads people to fixate on familiar solutions and miss better ones.

The ongoing interplay between neuroscience and artificial intelligence has been driven largely by interdisciplinary pioneers. Of the approximately 200 laureates ever named to the ACM Turing Award, Kavli Prize in Neuroscience, Gruber Neuroscience Prize, The Brain Prize and IEEE Neural Networks Pioneer Award, we found 18 whose prize citations explicitly connect AI with brain science. Of these, 14 were at least partly interdisciplinary by training.

One example is John Hopfield, a physicist who entered neuroscience in his forties with no formal biological training. Drawing on physics, he showed mathematically how a network of neurons could store and recall memories. His work gave both neuroscientists and AI researchers a shared mathematical language to think about learning and memory, and was eventually recognized by a Nobel Prize.

Geoffrey Hinton, who shared the Nobel Prize with Hopfield, trained as an experimental psychologist before moving into computer science. He pioneered the development of artificial neural networks, creating models inspired by how the human brain processes information.

Citations are not everything, especially for breakthrough science. But discipline-crossing outsiders can succeed even by this traditional metric, if they are willing to wait. Interdisciplinary papers reach peak citations an average of seven years after publication, compared with three years for specialized papers, and their longer-term impact can be substantial. Among research published before 2000 (allowing long-term impact to settle), researchers who crossed between conceptually distant fields produced work that was three times more likely to land in the top 10 percent of citations.

Discipline crossing has also helped introduce new ideas in industry. From 2001 to 2004, 26 major pharma and chemical companies posted 166 unsolved R&D problems to InnoCentive, an open platform reaching over 80,000 scientists worldwide. Within six months, 30 percent were solved. The further a solver’s expertise was from the problem’s home field, the more likely they were to solve it. For example, a concrete engineer solved a longstanding oil contamination problem by applying vibration methods from construction.

Comprehensive reframing is a paradigm shift

Outsiders are more likely to generate what Thomas Kuhn called paradigm shifts. Science usually progresses by incrementally adding facts within an existing paradigm. A paradigm is like a rulebook for a field: it defines the assumptions, methods, and examples that guide everyday research. But paradigms have limits. Over time, evidence accumulates that cannot be explained within the existing paradigm. At that point, for science to progress, the paradigm must be entirely superseded by a new one. It is in those revolutionary shifts that outsiders are most useful.

This pattern can be seen in the rise of quantum mechanics. By the late nineteenth century, most everyday phenomena (ranging from the paths of planets to the efficiency of steam engines) seemed explainable using physics derived from Newton’s laws. In 1874, physicist Philipp von Jolly reassured his students that physics was largely complete, except for ‘perhaps a dust particle or a small bubble to be examined and classified’.

But some of those ‘dust particles’ provided evidence that could never be explained by classical physics. When an object is heated enough – say an iron poker in a fire or the filament of a lightbulb – it gives off a temperature-specific pattern of colors known as blackbody radiation. Classical physics could not accurately predict the observed distribution of these colors. The puzzle was only resolved when Max Planck, a former student of von Jolly’s, hypothesized that energy is not continuous but comes in tiny, discrete packets called quanta. This new model accurately predicted the observed spectrum.
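
As a brief aside for readers who want the contrast in symbols, the standard textbook comparison (sketched here for illustration; the formulas are not part of the original argument) sets the classical Rayleigh–Jeans law, whose predicted radiance grows without bound at high frequencies, against Planck’s law, which assumes energy comes in packets of size $E = h\nu$ and so falls off to match observation:

$$B_\nu^{\text{classical}}(T) = \frac{2\nu^{2} k T}{c^{2}}, \qquad B_\nu^{\text{Planck}}(T) = \frac{2 h \nu^{3}}{c^{2}} \cdot \frac{1}{e^{h\nu/kT} - 1}$$

Here $\nu$ is frequency, $T$ is temperature, $c$ is the speed of light, $h$ is Planck’s constant, and $k$ is Boltzmann’s constant. When $h\nu/kT$ is small, the two expressions agree, which is why classical physics worked well everywhere except at high frequencies.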

Planck’s discovery was extended by Heisenberg, Niels Bohr, Erwin Schrödinger, and others into quantum mechanics – an entirely new framework for understanding matter and energy. Quantum mechanics, in turn, cross-pollinated with other disciplines to create new fields: explaining molecular structure in quantum chemistry and electron behavior in solid-state physics. Its principles now underpin much of modern technology. Lasers became possible once quantum theory revealed that atoms could be forced to emit identical photons. MRI machines were enabled by the quantum discovery that atomic nuclei have a property called spin, allowing their behavior in magnetic fields to be measured and turned into images.

New paradigms can also streamline and compress existing knowledge, making it more cognitively manageable as complexity increases. A helpful analogy here is the compression of language for efficiency: as societies develop, we create new words that encompass more information, allowing for the communication of more complex concepts while maintaining a sentence length that fits our cognitive capacities. Words like ‘ecosystem’ and ‘inflation’ compress sprawling, complex ideas into single terms, freeing up mental bandwidth and speeding up communication in the same way that a paradigm frees scientists from memorizing and repeating countless individual facts.

Strategies to manage complexity exist across many fields. Chess grandmasters don’t memorize where every piece goes – instead, they recognize familiar patterns of configuration, like a castled king or a common opening trap. These patterns are chunks of meaning that compress many details into a single unit, letting the expert bypass the limit of working memory. 

Similarly, as we expand our knowledge, new paradigms help us navigate higher complexity without endlessly expanding the number of concepts we need to learn. For example, before Darwin, biologists catalogued endless quirks of anatomy and behavior across species. But there was no underlying logic to this complexity. With the theory of natural selection, they found a single explanatory framework to tie it all together. This made biology more coherent and easier to reason within: predictions could be made from first principles, rather than recalled from memory. 

Excessive complexity is often a sign that we have the wrong model because good models explain more with less. When a model is powerful, it compresses many observations into a few simple rules. When it is wrong, the rules often multiply as we patch over anomalies rather than resolving them.

For centuries, astronomers believed that everything orbited the Earth. But as observation techniques improved, this claim created problems: planets weren’t moving in smooth circles across the sky. To explain these anomalies, astronomers added layers of epicycles: circles on top of circles, shifting centres, changing speeds. When Copernicus proposed that the Earth orbited the Sun, and Kepler refined this with elliptical orbits, a much simpler system emerged with fewer rules.

This is also why outsider breakthroughs often do not depend on classical ‘expertise’. Copernicus didn’t need a thorough understanding of epicycles; his new system made them obsolete. Similarly, Francis Crick did not help discover DNA’s double helix structure by developing a deeper mastery of biochemical techniques than his contemporaries. Instead, he imported conceptual frameworks from physics, the discipline in which he was trained, reframing heredity as a problem of structural information.

Paradigm shifts are enabled by the right conditions

The frequency of paradigm shifts is not fixed. They become more common when people have the freedom and resources to create unconventional ideas and follow them through. While this depends on many conditions, including social tolerance, political stability and economic growth, one enabler is an intellectual ecosystem that makes room for outsiders.

Between 1400 and 1600, Florence produced extraordinary scientific and technical innovations across engineering, mathematics, anatomy, optics, and astronomy. These achievements were enabled by a civic culture that encouraged polymaths and boundary-crossers, underpinned by sustained patronage from the Medici, unusually high urban literacy, a rapid uptake of printing, and guild-based institutions that blurred boundaries between art, science, and engineering.

In the twentieth century, surges of scientific innovation have often aligned with moments when societies faced urgent goals (war, reconstruction, the space race) that demanded pragmatic results. To move quickly, managers were often allowed to back promising ideas directly and build teams from wherever skill could be found, with less emphasis on prestige or academic discipline. This made it easier for outsiders to get involved in roles where they could have real influence.

In the United States, the wartime Office of Scientific Research and Development gave managers the authority to fund projects directly. These managers assembled cross-disciplinary teams of academics, industrial scientists, and independent inventors, with clear objectives and the autonomy to pursue them. Its Cold War successor, DARPA, kept the core design of program managers with wide discretion to back unusual ideas and bring in unconventional talent. These institutions seeded inventions such as radar, the early Internet, and stealth technology.

Similar models emerged globally. Britain’s Medical Research Council and West Germany’s Max Planck Society both created full-time research institutes with stable, long-term funding and gave directors the freedom to hire across disciplines and borders without university or committee approval. Again, these institutes were associated with breakthroughs such as monoclonal antibodies and the quantum Hall effect.

There is some evidence that paradigm shifts have become less frequent in recent decades. One way to track this is by measuring the number of distinct research topics introduced in scientific paper titles relative to the total number of title phrases. This measure shows sharp increases after World War II and in the early 1960s, followed by a long plateau and even a slight decline over the past 20 years. Since new topics often signal emerging frameworks, this decline may reflect a slowdown in the arrival of fundamentally new ideas.

Another way to spot paradigm shifts is by looking at how often new work displaces old. When fields are being reshaped by fresh ideas, one would expect citations to shift toward recent papers and away from older ones. But since the 1960s, the share of citations going to papers published in the previous five years has been falling. Since the mid-1990s, inventors have also cited fewer recent papers when filing new patents.

How to bring outsiders back in

If outsiders have fewer spaces to work, the obvious question is how to create more.

Progress accelerates when more people from various backgrounds work on the problems they’re most eager to solve. This means supporting researchers to move across fields; a small number of programs, like the Simons Foundation’s Pivot Fellowship, do this, but such initiatives remain rare. It also means funding younger scientists to gain independence early (as the Arc Science Fellows program provides) and funding independent researchers outside academia (for example, through ACX Grants and Emergent Ventures).

Another approach is to open up problems through competitions. These break complex problems into clear challenges, providing data, tools, and rewards to entice a wide range of people to take part. For example, in the recent Vesuvius Challenge, three students with no background in papyrology used AI to decipher fragments of a rolled-up 2,000-year-old scroll that could only be viewed via X-ray.

A fragment of a carbonized scroll from Herculaneum, deciphered by machine learning and non-experts as part of the Vesuvius Challenge. The scrolls had remained unreadable for nearly two millennia. Image: Vesuvius Challenge.

A longer-term approach is to build institutes that give unconventional thinkers the freedom to pursue cross-disciplinary ideas. Institutions like Bell Labs, Xerox PARC, the Laboratory of Molecular Biology, and more recently DeepMind, succeeded in part because they were willing to bet on such talent. 

Unconventional work is often missed because attention is scarce. To triage attention, editors and reviewers fall back on easy proxies (like prestige, familiarity, and trends) that are quick to recognize and defend.  While these filters are flawed, the need for triage is real, and any reform must accept this need while finding better ways to identify talent.

One solution is to diversify the range of publishing formats available. Today, the great majority of research is published as journal articles. This format suits polished narratives but often excludes other contributions, including single data points, null results, or replications. A more resilient publishing system would offer a hierarchy of formats matched to the strength of the evidence: registries for replicated findings; pre-registered reports for well-defined hypotheses; and exploratory briefs for early-stage ideas. This helps focus attention more effectively by making it easier to see what’s uncertain and what’s well-established, using standards of evidence rather than pure prestige. It would also make it easier to challenge existing claims, helping outsiders surface anomalies and, in the long run, improving the overall reliability of the literature.

Some efforts are already moving in this direction. Registered Reports allow researchers to pre-register hypotheses and methods before results are known, reducing publication bias. The Journal of Open Psychology Data publishes standalone datasets. The Reproducibility Project systematically conducts and shares replication attempts. And new platforms like Octopus aim to break up the traditional paper entirely, letting researchers publish single claims, methods, or analyses as discrete, citable outputs.

Technology can support this by easing the attention bottleneck. Smarter tools can effectively expand cognitive capacity: semantic search, citation analysis, and structured summarization (offered by services such as Semantic Scholar, scite, and Elicit) can help surface overlooked work and make unfamiliar ideas easier to grasp. Used alongside new formats and incentives, they can make the filtering of scientific attention both faster and more receptive.

None of this is to diminish the importance of incremental science or specialized expertise, which drive much of our knowledge today. But science flourishes when it cultivates a diverse ecosystem – where specialists refine, generalists cross-pollinate, and outsiders plant new orchards.
