Issue 01

The rise and fall of the industrial R&D lab

28th August 2020

For a time in recent history, R&D labs seemed to exist in a golden age of innovation and productivity. But this period vanished almost as swiftly as it arrived.

Once, small firms centred on inventors were responsible for most of our innovation. Larger firms might buy or exploit these steps forward, but they did not typically make them. And then, for a brief period, this changed: many of the best new products, tools, and ideas came from research labs within large corporations. This brief period also happened to be the era when scientific, technological, and economic productivity sped forward at its fastest ever clip. Yet almost as soon as it arrived, the fruitful period was over, and we returned to a situation where small companies and small-business-like teams at universities develop innovations outside of large companies and sell them in a market for ideas. Though we might enjoy the innovation created by small, flexible firms, we should not dismiss the contributions made by large corporate labs. The corporate lab may be creeping back, but aggressively prosecuting antitrust against large firms growing organically through in-house research could easily snuff this spark out.

The USA’s first system of innovation

When the USA began to contribute to the progress of technology, in the early 19th century, its contribution was largely practically minded, not based on deep scientific understanding. It largely consisted of individual inventors commercialising their own inventions. Whereas nuclear power depended on decades or centuries of progress in physics, many late 19th century innovations were more like the cotton gin, which came together through practical-minded trial and error in the field. By the late 19th century, the system had morphed into one we would find strangely familiar today: inventors invented, venture capitalists invested, and companies commercialised. The system even had patent lawyers and non-practising entities, which own patents purely in order to license or litigate them rather than to practise the inventions themselves. There were still startups commercialising an idea and scaling it up themselves, but many inventors found that the division of labour enabled by the market for ideas allowed them to focus on what they did best.

Large firms were consumers of ideas created by inventors, and were sceptical of the value of doing in-house science. They believed it was easier to buy new science off the shelf. In 1885 T. D. Lockwood, head of American Bell Telephone Company’s patent department, said: 

I am fully convinced that it has never, is not now, and never will pay commercially, to keep an establishment of professional inventors, or of men whose chief business it is to invent

Of course, Bell Labs itself later grew to be one of the marquee names among commercial labs—in the late 1960s it employed 15,000 people, including 1,200 PhDs, who between them made too many important inventions to list, from the transistor and the photovoltaic cell to the first digitally scrambled voice audio (in 1943) and the first complex number calculator (in 1939). Fourteen of its staff went on to win Nobel Prizes and five to win Turing Awards.

The 1920s stock market boom was in large part driven by a huge rise in the value that investors accorded to intangible capital and ideas held within companies. A similar thing happened in the 1990s dot-com bubble. Between 1921 and 1927 the number of scientists and engineers in industrial labs more than doubled. When the stock market crashed and the Great Depression hit, it caused a massive and persistent decline in independent inventing and the startup-like activity around it. But large labs continued to boom, increasing staffing and research spending throughout the lean 1930s, and earning more patents. By 1930, most patents were issued to large firms rather than independent innovators, and this gap only widened into the 1950s. The industrial lab had become king.

Why labs work well

The question of why the industrial lab works is a microcosm of the question of why the firm works in general. Economist Ronald Coase, who won the Nobel Prize in 1991 (and who lived to be 102), bookended the most productive period of his career with two insights about transaction costs. The first, published in 1937 and entitled “The Nature of the Firm”, tells us why firms exist. In economics, situations are typically approached from the perspective of competitive behaviour in open markets. Most of the things we buy come from open, competitive markets like these. But when we sell our labour, we are usually bound to a single “buyer”—our employer—for an extended period of time, for everything we have to offer. If market competition is so efficient, why do we not set up mini-firms for every instance of cooperative work and receive pay in line with our output? Why do we instead generally sell a promise to do whatever our boss says, within limits, for certain hours of the day, months or years in advance?

The other, “The Problem of Social Cost”, reads like a reflection of the first. Published in 1960, it spawned the so-called Coase Theorem, which holds that so long as transaction costs—the costs of interacting with other individuals or institutions, such as the costs of drawing up and enforcing a contract—are low, people will contract to deal with the problems emerging from positive and negative externalities, like the benefits of a new park to a neighbourhood or the costs of pollution. Where transaction costs are too high, institutions and policies are needed to deal with the externalities instead.

Coase answers the question of why businesses exist in the same way he answers the question of why people don’t simply contract away all externalities: transaction costs. If there is a cost on both sides every time one person wants to pay another for a task, then some tasks will not be worth paying for, or worth doing at the going rate. Concretely: an employer who does not negotiate contracts for each and every individual work unit can afford to offer higher wages, and an employee who does not do so can afford to accept lower wages. Companies organised in this way thus outcompete pure market organisation—in cases where transaction costs are large.
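To make the logic concrete, here is a stylised illustration with invented numbers (they are not from Coase or from any study). Suppose a task creates £100 of value for the buyer and requires £80 of effort from the worker, and that a one-off market transaction costs each side £15 to find a counterparty and to draw up and enforce a contract:

$$\underbrace{100 - 80}_{\text{gains from trade}} = 20 \;<\; 30 = \underbrace{15 + 15}_{\text{two-sided transaction costs}}$$

In a pure spot market the task goes undone; inside a firm, where the terms of cooperation are negotiated once and reused across many tasks, the same task is worth doing.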

In many ways, this general story for why firms exist also explains why large firm R&D labs were so successful. The transaction costs of collaboration are extremely large, and prevent all sorts of potentially-valuable crossovers: not just the financial costs of contracting with others, but the costs of finding people you work well with, of corresponding and collaborating with people far from you, and so on. University lecturers collaborate more with those in their department than in other departments, and more with those in their university or city than elsewhere, despite all the tools and technologies the internet has brought. Chance meetings are another driver of serendipitous discovery and unexpected but fruitful collaborations. 

What’s more, without great efforts in ‘translation’, many scientific ideas can remain completely disconnected from practical applications. A research lab brings an array of scientific experts from different disciplines together, letting them collaborate and draw on one another’s expertise at low cost. Fellow researchers bump into one another. And the firm context means the potential impact in terms of usable products is always taken into account.

The DARPA Era

The era of the R&D lab has one particularly legendary story and paradigmatic example: PARC. Xerox’s Palo Alto Research Centre—the location in Palo Alto, now the home of companies like Tesla, Palantir, and Google, is no coincidence—developed many of the foundational building blocks of today’s technology and economy. In the 1970s, PARC researchers built the first computer with a graphical user interface, the first laser printer, the first Ethernet network, and the first user-friendly word processor. Steve Jobs visited PARC in 1979, aged 24, and incorporated many of its ideas into Apple products. Charles Simonyi, a key developer at PARC, moved to Microsoft, where he led the development of what became the Office suite. But Xerox itself, which is still largely known for making photocopiers, did not capitalise on these inventions.

PARC, in turn, had hired many of its workers from the Augmentation Research Centre, a publicly funded project which pioneered the computer mouse, hyperlinks, the earliest predecessor to the internet, and many smaller innovations we take for granted in today’s computer ecosystem. ARC was funded by ARPA, the Advanced Research Projects Agency, now DARPA. Though DARPA (then ARPA) is funded by the US Department of Defence, it shares many elements with the golden era R&D labs. Its programmes are organised around a mission and a goal—even the most basic research is done with an end in mind—but researchers are given a lot of freedom to make their own decisions.

A return to the market for ideas

The scale of the change since the 1970s is huge—big businesses have retreated from research. In the 1960s, DuPont, the chemicals giant, published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. R&D magazine, which gives its R&D 100 awards to the hundred innovations it judges most significant in a given year, gave 41% of its awards to Fortune 500 companies in 1971 and 47% in 1975. By 2006, only 6% of the awards were going to firms in the Fortune 500. The great majority of these awards are now won by federal labs, university teams, and spin-offs from academia. The lone inventor is back.

This is reflected in declines in the share of patents going to the biggest businesses, and in the share of scientists working there. In 1971 just over 7% of the scientists in industry tracked by the US National Science Foundation worked in firms with under 1,000 employees; by 2004 this had risen to 32%. In 2003 around a quarter worked at firms with fewer than ten employees. Even pharmaceuticals, the one area where large internal research labs are still significant, has been affected—around half of the drugs approved in the 2010s were originally discovered by small biotech startups.

In general, participation by large American firms in scientific research declined after the 1970s and 1980s. Newcomers who did little research entered the market; large firms let their labs run down. Scientific research came to add less to firms’ valuations in mergers and acquisitions. Among firms still doing R&D, the number of publications per firm fell by roughly 20% per decade between 1980 and 2006.

In the absence of large firm innovation, we now have an innovation system in which startups and small teams, whether private sector or academic, do most early stage innovation. These teams then sell their work to larger ventures (enabled by the patent system), are acquired wholesale, or, more rarely, scale up with venture capital funding to become large businesses themselves. As in the first major era of commercial invention and science, a large patent industry has grown up to adjudicate claims and to get around the key problem of contracting in a knowledge economy: revealing your ideas without protection is tantamount to giving them away for nothing.

Why labs failed

No one is quite sure why the lab model failed. It’s obvious that a scenario where Xerox pays scientists to do research that ultimately mostly benefits other firms, potentially even competitors that help to put it out of business, could never survive. Similarly, the tension between directing scientists with their own pure research goals towards something commercially viable and leaving them enough latitude to make important leaps seems huge. But these problems were always there in the model. What is harder to identify is an exogenous shock, or set of shocks, that changed the situation that held from the 1930s until somewhere between the 1960s and the 1980s.

One possibility is antitrust enforcement. From 1949 the authorities pursued a case against AT&T, which ultimately resulted in a consent decree confining the company to telecommunications, barring it from entering other markets, and compelling no-fee licensing of all 7,820 of its existing patents (1.3% of the total stock of patents in force in the USA at the time). There is evidence that this move rippled across the US economy, providing a foundation for many of the great innovations of the next fifty years. But this would be true of almost any mass patent invalidation: the monopoly restrictions of patents, once they are granted, are the cost we pay for the investment in innovation that came before.

As well as spurring innovation elsewhere as a one-off, this move likely had a chilling effect on innovation inside big firms’ R&D labs. Later enforcement actions, such as the 1974 suit that eventually led to AT&T’s breakup in 1984, would have pushed in the same direction. These actions reduced the incentive to generate precisely the game-changing general purpose technologies that we want. They did this by creating a risk that if you did go from zero to one and manage to gobble up a whole marketplace, you’d have that taken away. What’s more, they reduced the size, scope, and vertical integration of firms—and all of these mean that innovation spills over more, and is captured less by the firm. If antitrust means that large, extensive businesses like AT&T are more likely to be broken up, there is less value in research whose returns can only be captured by businesses that are or can become large and extensive.

On the other hand, Prof Ashish Arora and colleagues argue that antitrust enforcement restricting acquisitions, as opposed to organic growth, may have had the opposite effect. Businesses that cannot acquire innovation from outside, and face a higher risk of enforcement action if they grow through M&A, may feel as though the only low-risk route to expansion is organic growth. They may even feel as though the high-status and clearly socially useful activity embodied in quality basic research acts as a sort of insurance against aggressive antitrust activity. They believe that this may be operating right now: rather than Google and Facebook’s investment in technology being a cause of their so-far-sustained dominance in their main markets, it is a defensive result of that dominance.

Another possible answer is that non-policy developments have steadily made spillovers happen faster and more easily. Technology means faster communication and much more access to information. An interconnected and richer world doing more research means more competitors. And while all of these are clearly good, they reduce the technological excludability of new ideas, even if legal excludability hasn’t changed, and so may have reduced the returns to innovation by individual businesses. Whatever the cause, ideas do seem to reach competitors much more quickly and completely now, and this does seem to be weighing on the incentives for research.

The Great Stagnation

Alongside all this, it seems that progress has been slowing down since the age of the lab, however we measure it. This may be coincidental. It’s possible that the lab era only looks so good because of when it came. The post-war era, roughly 1945-1973, was special in many ways. It may be that the other features of this era caused rapid growth and scientific progress that was always going to tail off eventually. We shouldn’t feel bad for failing to live up to this special era of rebuilding and new ideas, and neither should previous generations.

Still, the change is pretty notable. Robert Gordon points out that American GDP per hour grew 1.79 per cent per year between 1870 and 1920 and 1.62 per cent per year between 1970 and 2014 (the figure is similar extended out to 2020). Between 1920 and 1970, by contrast, productivity grew at 2.82 per cent per year. My paper with Tyler Cowen finds a similar trend for most measures of technological and scientific progress. The number of researchers needed to develop a new idea is growing, the rate of major innovations is falling, and economic productivity is rising more slowly. Simple metrics like aeroplane engine power, crop yields, life expectancy, height, and computer processing speed are not keeping pace with past rates of increase. There was steady progress from the industrial revolution until the mid 20th century, when growth reached its fastest ever clip; progress has slowed down since.
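Compounded over decades, the gap between these rates is large. As a rough illustration (the fifty-year horizon is mine, not Gordon’s):

$$1.0282^{50} \approx 4.0 \qquad \text{versus} \qquad 1.0162^{50} \approx 2.2$$

At the 1920-1970 rate, output per hour roughly quadruples over fifty years; at the post-1970 rate, it little more than doubles.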

Why universities and startups haven’t solved the problem

In many ways, an innovation system based around an open market for ideas, with the division of labour between specialised firms, rather than specialised teams within firms, is attractive. It seems obvious that small new ventures can be more flexible and adapt to new situations as they arise, and perhaps come up with new ideas more rapidly than big incumbents. But there are several reasons why our current model may not be delivering.

One is that disintegrated businesses have less incentive to research general purpose technologies. One famous estimate finds that in the long run society at large captures 98% of the value of new innovations, and the innovator 2%. This implies that, by themselves, businesses will not do as much research as is optimal from society’s point of view.
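A stylised way to see the underinvestment, using the 2% figure for illustration (the cost and value here are hypothetical): suppose a research project costs C and would create total social value V. A firm expecting to capture only 2% of that value undertakes it only if

$$0.02\,V > C$$

so any project with

$$0.02\,V < C < V$$

is socially worthwhile but privately unprofitable, and goes undone.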

But large vertically integrated companies may have just enough incentive to make it worthwhile, because they can hold on to and use more of the benefits of new discoveries than smaller firms would be able to capture, even with robust intellectual property protections. For example, since most microchip companies split into fabless chip designers and fabs, the remaining integrated chipmakers have worked on more systematic innovations—as opposed to merely optimising the efficiency of the frameworks they already work under. And spillovers are not pure costs if the businesses doing the discovery can still use the discoveries too. AT&T supported Claude Shannon, the father of information theory, in doing pure mathematics, since even though most of this work would spill over, any increase in communications knowledge would benefit the company enormously. IBM supported nanoscience research with limited immediate practical usefulness in the belief that it would help the company benefit from any revolutionary shift in chip design.

Labs, compared with university researchers, also maintain a constant link with delivering value and, ultimately, profitability. University incentive, prestige, and funding regimes suffer from the standard problems around non-profits: if you’re not trying to make profits, what are you trying to do? How do you know that what you’re doing is socially useful? Working without a profit signal can lead to deeply broken incentive systems and extremely wasteful admin burdens that weigh heavily on productivity, along with research that is in no way tied to improving human lives. Some estimates suggest that university scientists spend just a third of their time on active research. Historically, labs have seemed less prone to this problem.

Labs also have stronger incentives towards multidisciplinarity. Since researchers are trying to achieve concrete goals, there is less of a tendency towards the hyper-specialisation and status competition of universities. Startups, by contrast, may be unable to raise funding in advance for many different types of researcher to work together before they have had any success—and their ideas may not be easily separable into single patents they can sell on. Historical successes in labs have involved blends of different expertise, for example the team of physicists, metallurgists, and chemists who developed the transistor at Bell Labs.

Google and Big Tech: The new industrial lab

There is a promising spark of big lab activity. Big Tech firms are investing heavily in machine learning, neural networks, and other artificial intelligence research. Google employs 1,700 AI researchers, who write more highly collaborative, better-cited papers than university authors, work with more expensive and advanced equipment, and use bigger datasets. Large firms’ publications in machine learning then feed into the patents of other firms, and thus spill over to society in general. Google X funds ‘moonshot’ high-risk, high-reward ideas, including Google’s self-driving car and other projects like Google Glass and balloon internet for rural areas.

Prof Arora and collaborators think this return to R&D is driven by fears of a new wave of anti-tech antitrust enforcement: Google and Facebook invest in research because buying it through acquisitions has become legally more difficult. But the case for the opposite is just as strong: they attract antitrust ire because their internal investment has paid off and they have taken huge shares of various markets on the back of it. In this opposite story, a small recent return to R&D labs would come down to the long-term effects of the relatively weaker antitrust enforcement seen since the 1980s.

Smashing up today’s big businesses in a re-enactment of the decisions that brought down the Bell system would be likely to have similar effects now to those it had then. Their large scale and scope, and their relative confidence that they will be able to benefit, at least somewhere in the firm, from the technologies they develop, are a key reason they spend so much time innovating.

It looks likely that America will continue to drive the world’s research for a considerable period. American innovation is likely to go down one of two paths – or perhaps a blend of the two. Antitrust bodies may be restrained, in which case we may see the return of various large in-house labs; this, combined with the increased scientific contribution of India, China, and other rising powers, could see productivity speeding ahead at the clip of the 1960s once again. Or enforcement may be stepped up, the nascent corporate labs may be wound back down, and early stage innovation will continue to rest with startups, universities, and the market for ideas.
