Nanotechnology sometimes sounds as much like science fiction as artificial intelligence once did. But the problems holding it back seem solvable, and some of the answers may lie inside our own bodies.
The usual way of thinking about the Great Stagnation is as a faltering of preexisting trends. From 1920 to 1970, US total factor productivity grew at a rate of two percent per year – since then growth has fallen by more than half. From 1800 to the late 1970s, American energy consumption per capita grew at around the same rate, a phenomenon J. Storrs Hall calls the Henry Adams curve. Since 1978, per capita energy use has actually declined. If these trends had not faltered in the 1970s, we might expect energy consumption to be three times as high as it is today, and living standards to be about twice as high as today’s.
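The counterfactual multiples can be sanity-checked with back-of-the-envelope compounding. The break years and clean 2%-versus-1% rates below are simplifying assumptions, not sourced figures; exact multiples depend on the data series used.

```python
# Rough check of the stagnation counterfactuals. Assumed, not sourced:
# a 1978 break for energy, a 1970 break for TFP, and clean 2% vs. 1% rates.

energy_years = 2022 - 1978
tfp_years = 2022 - 1970

# Energy per capita, had the Henry Adams curve continued at ~2%/yr
# instead of flatlining (actual use declined, widening the gap toward 3x):
energy_gap = 1.02 ** energy_years

# Living standards: one extra percentage point of TFP growth compounded
# since 1970 (a rough proxy for the "about twice as high" claim):
tfp_gap = 1.01 ** tfp_years

print(round(energy_gap, 2), round(tfp_gap, 2))
```

The trend-extrapolated figures come out around 2.4x for energy and 1.7x for living standards; the declines and other channels omitted here push the gaps toward the "three times" and "about twice" quoted above.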
Yet what if our ambitions should go beyond restoring pre-Stagnation trends? Storrs Hall devotes a portion of his book Where Is My Flying Car? to one technology so potent that, if the hype can be believed, it would leave past trends in the dust. It’s called nanotechnology.
As Storrs Hall relays in the book, he and his friend Robert Freitas each independently and simultaneously estimated that mature nanotechnology could replicate the entire capital stock of the United States – ‘every single building, factory, highway, railroad, bridge, airplane, train, automobile, truck, and ship’ – in a mere week. If you take the value of the capital stock at roughly $80 trillion and ignore compounding, that would correspond to a US GDP of four quadrillion dollars per year, around 160 times as high as it is today. Is such a thing possible?
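The arithmetic behind that figure is simple to reproduce. The $80 trillion capital stock comes from the text; the ~$25 trillion current US GDP is an assumed round number.

```python
# The Storrs Hall / Freitas thought experiment as arithmetic. Assumes a
# US capital stock of roughly $80 trillion and current GDP near $25 trillion.

capital_stock = 80e12        # dollars
weeks_per_year = 52

implied_gdp = capital_stock * weeks_per_year   # rebuild it every week
ratio = implied_gdp / 25e12

print(f"${implied_gdp:.2e}/yr, {ratio:.0f}x current GDP")
```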
Nanotechnology is the production of complex systems using individually placed atoms or small molecules as building blocks. In Storrs Hall’s vision, it would enable physical materials to have almost magical properties. If you want not only a flying car, but one that can fold up into George Jetson’s briefcase, or for furniture to seemingly materialize out of thin air when needed and to disappear when not wanted, nanotechnology is for you. Yet the emphasis must be placed on almost magical. Storrs Hall insists nanotech is grounded in reality. ‘Absolutely standard physics, quantum mechanics, and chemistry’, he says, ‘predict that certain arrangements of atoms will behave as workable machine parts, and our existing knowledge in these disciplines allows us to calculate the performance of machines so built.’

If nanotechnology as envisioned by Storrs Hall is possible, then it makes a mockery of claims we have ‘picked the low-hanging fruit’ of economic progress. The future of industry could be radically different than it is today. In a decade or two, with the proper urgency, claims Storrs Hall, we could have unfathomably high living standards. The promise of nanotechnology may be hard to believe in, but it’s at least worth an investigation.
The Feynman path
In 1959, eminent physicist Richard Feynman delivered a talk entitled ‘There’s Plenty of Room at the Bottom’ to the American Physical Society. With characteristic flair, he captivated his audience. Although Feynman spoke apparently without notes, an admirer had lugged a tape recorder to the talk, and so we have today a transcript of it, stripped by some humorless scribe of the jokes that often infuse Feynman’s lectures.
Feynman’s talk was aimed at getting the physicists he was addressing to appreciate the opportunities afforded by smallness. He begins with what today seem like rather mundane thought experiments, like writing the contents of the Encyclopedia Britannica on the head of a pin. He exhorts his colleagues to develop a better electron microscope. Then he develops a few intriguing possibilities – if only we could manipulate atoms directly. At the height of his talk, Feynman considers the possibility of getting there using a simple recursive process. Imagine a set of robot hands that can be controlled through the movement of one’s own hands. Suppose these robot hands could use suitable machine tools to make robot hands and tools that are one fourth the size. Once these smaller hands exist, substitute them for the full-size robot hands, and use them to make a yet smaller 1/16-scale version. Repeat continuously, until you are handling individual atoms.
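It is easy to count how many quarter-scale generations the recursion would take. The meter-scale starting point and the ~0.1-nanometer atomic target are illustrative assumptions.

```python
import math

# How many quarter-scale generations does Feynman's recursion need?
# Assumes meter-scale starting hands and a ~0.1 nm (atomic) target scale.

start = 1.0        # meters
target = 1e-10     # meters, roughly one atomic diameter
factor = 4         # each generation is one fourth the size

generations = math.ceil(math.log(start / target, factor))
print(generations)  # 17
```

Only about seventeen generations separate human hands from atoms, which is part of what made the proposal so tantalizing.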
Is the development of more-precise from less-precise tools possible? Assuredly so. After all, our most precise tools today had to come from somewhere. Indeed, you could think of industrial civilization itself as a steady climb from crudeness to staggering precision. Watt’s 1776 steam engine improved on Newcomen’s design in several ways, but a critical one was the adoption of a precision boring device invented by Englishman John Wilkinson to produce the cylinder. The tolerance of one tenth of an inch, or perhaps better, was unheard of at the time.
Simon Winchester, the author of The Perfectionists, a delightful history of precision engineering, reckons that our pinnacle achievement in this domain so far is the LIGO experiment. Short for Laser Interferometer Gravitational-Wave Observatory, LIGO is an instrument designed to detect and study gravitational waves. LIGO initially failed at this task. In 2015, coming online after a years-long overhaul, it made the first detection of gravitational waves emanating from a merger of two black holes 1.3 billion light-years from Earth. LIGO’s precision is equivalent to inferring the distance between Earth and Alpha Centauri A, roughly 26 trillion miles, to within the width of a single human hair.
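The analogy can be restated as a fractional precision and compared against LIGO's published strain sensitivity. The hair width and mile-count below are assumed round numbers.

```python
# Sanity-check the LIGO analogy. Assumed round numbers: Alpha Centauri A
# at ~26 trillion miles and a human hair ~100 microns wide.

MILES_TO_METERS = 1609.34
distance = 26e12 * MILES_TO_METERS   # ~4.2e16 m
hair = 100e-6                        # meters

fractional_precision = hair / distance
print(f"{fractional_precision:.1e}")
```

The result is on the order of 10^-21, which is indeed the scale of the strain LIGO resolves.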
That we got to LIGO somehow, starting with much cruder tools, proves that it is possible to bootstrap precision manufacturing from an imprecise starting point. Why should we stop at the capabilities of 1959 or 2022? The logical endpoint of our ascent into precision is to assemble things atom by atom. As far as we know, there is nothing beyond atomic precision, no way to harness subatomic particles to create new, exotic, yet useful forms of matter.
For all his genius and foresight, Feynman had only a glimpse of what might be possible with advanced nanotechnology. Much of his smattering of ideas is not very inspiring to someone living in 2022. Some we have already achieved by other means – highly advanced microfilm for information storage, a computer so powerful that it could recognize faces. Others are vague, hand-wavy suggestions – a mechanical surgeon that you swallow as a pill that enters into your bloodstream and examines your heart valves. It took others to draw out the full economic implications of atomically precise manufacturing.
The Drexler path
In the 1980s, a young scientist named Eric Drexler came up with another possible path to nanotechnology. Instead of starting with today’s machine tools and increasing the precision, start with existing nanoscale biology and make it more machinelike. After all, biology itself is an irrefutable proof of concept of the possibility of complex systems built up from individually placed molecules. Consider the ribosome, a tiny molecular machine present in every cell made up of two wads of proteins and RNA. The ribosome manufactures proteins. Your body contains a sextillion or two proteins, and each one was extruded from a ribosome, so we’re talking high-volume production here. The proteins themselves are usually molecular machines or parts thereof. Drexler noted in his first, 1981 paper on molecular engineering (he did not yet use the term nanotechnology) that several proteins serve functions in the body directly analogous to machine parts, including motors, actuators, cables, driveshafts, pumps, conveyor belts, and production lines. If you could design proteins directly, you could engineer a molecular machine.
Protein design is clearly within the realm of the possible, which we now know because it has been done. Proteins are simply chains of amino acids, which are organic molecules that can be linked together. What it means to design a protein is to specify the order in which the amino acids should appear in the chain. Once the designer has determined the desired order, she could synthesize the protein on a tabletop device or encode it as genetic material and give it to a living ribosome (this is what the mRNA vaccines did). Ribosomes produce proteins much faster than our state-of-the-art lab equipment – human ribosomes can bond about two amino acids per second (and bacterial ribosomes are yet faster), while tabletop reactions happen at a rate of one bond every 2.5 minutes. This superior performance in an apples-to-apples task is just one indication of how much more efficient molecular machinery can be compared to bulk chemistry.
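The speed gap quoted above works out to a clean ratio:

```python
# Ribosome vs. tabletop peptide synthesis, per the rates quoted above.

tabletop_seconds_per_bond = 150.0   # one bond every 2.5 minutes
ribosome_seconds_per_bond = 0.5     # about two bonds per second

speedup = tabletop_seconds_per_bond / ribosome_seconds_per_bond
print(speedup)  # 300.0
```

A three-hundred-fold speedup, from a machine a few tens of nanometers across.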
Drexler was acutely aware of potential obstacles, including a key limitation of biological machines: They only operate inside the watery environment of the cell. This places strict temperature and pressure constraints on the process such a molecular machine might perform – if the water in the cell turned to ice or steam, the machine would stop working. Drexler reasoned that protein-based machines could be used to create a second generation of nanotechnology. Instead of being based on proteins, which are one-dimensional strings of amino acids that only happen to fold into useful three-dimensional shapes, the second generation could be fully three-dimensional from the get-go. They could be made from inorganic materials that would operate fine without water, possibly even in a vacuum.
In 1986, Drexler published Engines of Creation, a popular book based on his 1981 paper and subsequent research, and the first book to use the term nanotechnology. Here, he sketched out more of the details and applications of manufacturing at such a scale. To explain how production of macroscale objects could occur, he suggests that a rocket engine could be ‘grown’ in a large vat, essentially a bioreactor. He argues that nanomachines will be able to swim around inside the brain to explore how it works, and, armed with this information, engineers will be able to make nanoelectronic computers that mimic this behavior (only faster), heralding an age of true AI. In space, nanobots could be used to make light sails, spacesuits that feel like a second skin and have magical properties, and vast habitats made out of raw materials from asteroids. In the medical domain, nanomachines could cure disease, repair cell damage to forestall aging, and induce a kind of biostasis to enable freshly expired (i.e., barely dead) patients to resist decay until better repair mechanisms might be able to revive them. The ability to fully manipulate matter at the molecular and atomic level would enable us to build and do, if not anything, whatever we can imagine that is possible within the bounds of the speed of light, the second law of thermodynamics, and other limits described by physics and chemistry.
The nanoengineers
Engines, written in a style that is somehow both breathless and discursive, catalyzed intense interest in nanotechnology from a tiny group of devotees. While prior work on nanotechnology was light on certain details, efforts now turned toward concrete engineering focused on designing molecular machines themselves. One of the scientists involved, J. Storrs Hall, cofounded a company called Nanorex that produced an open-source CAD tool for designing nanomachines. In 2005, Storrs Hall published Nanofuture, which is today perhaps the best book-length introduction to nanotechnology. It even has a chapter on flying cars.
People quickly worked out the differences between macroscale machines and molecular ones. For one, at atomic scale, there are no flat or smooth surfaces. Just as zooming in on a digital image reveals the pixels of which it is composed, scaling a machine part down far enough forces you to grapple with the round, physical atoms of which all matter is composed. Truly circular wheels and perfectly flat planes are impossible; designs have to account for atoms’ lumpiness. Another challenge is the hardness of structures. At atomic scale, most of the materials that we consider rigid are positively floppy. We make airplanes out of aluminum because it is rigid, but not out of aluminum foil, which is about 60,000 atoms thick. A structural member made out of aluminum, say, five atoms in diameter would be next to useless. The solution for structural applications is diamond, which will still be rubbery at atomic scale, but serviceable. And why not? When we can mechanically place abundant carbon atoms in a diamond crystalline structure, diamond will be one of the cheapest options.
Nanoelectronics raises its own issues. One is that a wire’s electrical resistance is inversely proportional to the square of its diameter. A hypothetical wire one nanometer thick will have a trillion times the resistance of a one-millimeter wire of the same length. To account for this property in the almost impossibly thin wires of nanomachinery, it may be necessary to make them out of superconducting material, which has no electrical resistance at all. Another problem is that wires can’t run too close together. Electrons can easily ‘tunnel’ up to three nanometers – that is, they can blink out of existence in one place and reappear up to three nanometers away if the hop is energetically favorable. If wires are closer to one another than that, an electron flowing down one wire could tunnel to another, causing a short circuit or other problems. For systems like nanocomputers that require a lot of closely packed circuitry, more spacing will have to be introduced than the size of the components alone would imply. Alternatively, nanocomputers could be designed to be mechanical instead of electronic, transmitting information via vibrations or some other method.
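The trillion-fold figure follows directly from the scaling law, since cross-sectional area goes as the square of the diameter:

```python
# Resistance scaling for thin wires: for fixed length and material,
# R = rho * L / A, and A scales with the diameter squared.

d_macro = 1e-3   # 1 mm wire diameter, meters
d_nano = 1e-9    # 1 nm wire diameter, meters

resistance_ratio = (d_macro / d_nano) ** 2
print(f"{resistance_ratio:.0e}")  # 1e+12 -- a trillion times higher
```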
Another consideration for nanomachines involves lubrication. Basically, they don’t need it. Oil molecules themselves are large relative to nanomachine parts, so they can play no role in helping the parts slip by one another. What’s needed to prevent too much heat buildup is judicious selection of atomically slippery materials, perhaps graphene or nanotubes. There is no wear and tear on the machine because atoms cannot wear out, and no grit can enter the system to displace the atoms. On the other hand, nanomachines will either have to be designed with a great deal of redundancy or with self-healing properties, because they are highly vulnerable to cosmic radiation. If a cosmic ray hits your car and displaces an atom, it is unlikely to matter, but in a nanomachine, every atom counts. We already deal with this effect in computer memory, which has error-checking capabilities to account for occasionally flipped bits.
Nanomachines have another interesting property relating to angular speed. Because the distance a rotating piece of material extends is so much smaller than an analogous macro-scale part, the tension on the piece will be much smaller per unit of cross-sectional area. This means it can rotate faster. Nanomachines, therefore, are blazingly fast. A motor featured in Drexler’s Nanosystems textbook operates at 48 billion rotations per minute. It is electrostatic rather than electromagnetic, has a power density greater than 10¹⁵ W/m³, and is almost perfectly efficient. If scaled to power a Tesla Model S Plaid, 700 billion such motors would occupy a volume equivalent to fewer than 12 grains of sand.
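The grains-of-sand figure can be checked directly from the power density. The Plaid's peak power and the grain size below are assumed round numbers.

```python
import math

# Checking the "12 grains of sand" figure. Assumed values: Model S Plaid
# peak power ~760 kW and a 0.5 mm diameter grain of sand.

plaid_power = 760e3          # watts
power_density = 1e15         # W/m^3, Drexler's electrostatic motor

motor_volume = plaid_power / power_density          # ~7.6e-10 m^3
grain_volume = (4 / 3) * math.pi * (0.25e-3) ** 3   # ~6.5e-11 m^3

grains = motor_volume / grain_volume
print(round(grains, 1))  # ~11.6, i.e. fewer than 12 grains
```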
How should we supply energy to the motor? How about a fully reversible hydrogen power mill? This device would take as input two hydrogen atoms and one oxygen atom, and output electricity and water, similar to today’s hydrogen fuel cells. Unlike fuel cells, because it is handling the atoms directly, it could be thermodynamically reversible and fully efficient. Our Model S Plaid could swap its battery for 380 grams of power mills and a tank filled with 3 kilograms of hydrogen. The whole system would weigh about 7.5 pounds, although a bit more weight would be needed to account for a strong hydrogen tank, another item nanomachines could help build efficiently. Because the power mill is reversible, it could also do electrolysis, converting electricity and water to hydrogen and oxygen. Since the device is perfectly efficient in both directions, it would essentially solve energy storage problems.
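The energy and mass claims can be reproduced with standard figures. Hydrogen's ~120 MJ/kg lower heating value and the ~100 kWh Plaid pack are assumed reference values, not from the text.

```python
# Energy and mass of the hypothetical power-mill setup. Assumes hydrogen's
# lower heating value of ~120 MJ/kg, comparable to a ~100 kWh Plaid battery.

h2_mass = 3.0                # kg of hydrogen
lhv = 120e6                  # J/kg, lower heating value of hydrogen
mill_mass = 0.38             # kg of power mills

energy_kwh = h2_mass * lhv / 3.6e6         # joules -> kWh
total_lb = (h2_mass + mill_mass) * 2.20462

print(round(energy_kwh), round(total_lb, 1))  # 100 kWh, ~7.5 lb
```

Three kilograms of hydrogen thus carries roughly the energy of the entire battery pack, at a twentieth of the weight.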
These car-based examples are somewhat absurd because, with true nanotechnology, it’s not clear what cars would even look like anymore. Does it make sense to use sand-sized motors to power our existing wheels? If cars still moved on the surface, they could smoothly scoot about on trillions of little feet, but maybe they should just fly. Everything would need to be reconsidered to take advantage of our new capabilities. If we could, as Storrs Hall and Freitas estimate, use advanced nanotechnology to rebuild the entire capital stock of the United States within a week, it is doubtful that we would do so to the same specifications. True, mature nanotechnology would transform everything.
What counts as nanotechnology?
For a moment, it looked like the world was going to take nanotechnology seriously. In 2000, Bill Clinton proposed and got passed a National Nanotechnology Initiative (NNI). By all accounts, in the planning phase of the initiative, the proponents got it. A planning document noted that ‘the essence of nanotechnology is the ability to work at the molecular level, atom by atom, to create large structures with fundamentally new molecular organization’. The legislation authorizing the initiative expressed a similar sentiment. Yet when the NNI got going, it redefined the field: ‘Nanotechnology is the understanding and control of matter at dimensions of roughly 1 to 100 nanometers.’
Under the NNI’s new definition of nanotechnology, much more mundane research qualified for funding. Nanotechnology now included items like ultrafine powders used for coatings or cooked into other materials. Anything with nanotubes, even if they are created using bulk synthesis instead of atom by atom, counts. Some have speculated that the NNI was scared off from working on real nanotechnology due to dystopian fears about ‘gray goo’, a term Drexler now wishes he had not used in Engines, referring to uncontrolled replicating nanomachines outcompeting bacteria and plants and thus obliterating life on the planet. Others suggest a more banal reason for the change: Once the government started handing out money for nanotechnology, the powders and nanotube people rushed to have their products labeled nanotechnology. Storrs Hall calls this the Machiavelli Effect. The NNI still exists; its website is nano.gov.
What about computer chips? Like ultrafine powders and nanotubes, transistors are less than 100 nanometers across. Unlike those applications, transistors are part of a vastly more complex product and manufactured at the bleeding edge of our technical capability. So should computer chips be counted as nanotechnology? The answer is still a resounding no. Semiconductor fabrication is still based on the manipulation of atoms and molecules in bulk – it is ‘bulk technology’ in the parlance. Today’s transistors have tens of thousands of atoms in them. They will count as nanotechnology only when they are manufactured by manipulating individual atoms or molecules – and when they are, they will be far smaller and more performant.
Real nanotechnology computer chips would also be cheaper. According to one estimate, state-of-the-art transistors cost ten billion dollars per kilogram! Silicon, from which the transistors are made, is the second-most-abundant element in Earth’s crust and can be purchased for under two dollars per kilogram. Our transistor-manufacturing technology, therefore, imposes a five-billion-fold markup on the raw materials price for silicon. A hallmark of true nanotechnology, on the other hand, is abundance. The self-replicating nanomachines building nanotransistors would themselves be almost free, the energy needed to operate them would be negligible, and, even if manufactured commercially, the markup on computer chips over raw-materials cost would tend toward zero.
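The markup arithmetic, spelled out:

```python
# The five-billion-fold markup, from the estimates quoted above.

transistor_cost_per_kg = 10e9   # dollars, state-of-the-art transistors
silicon_cost_per_kg = 2.0       # dollars, raw silicon

markup = transistor_cost_per_kg / silicon_cost_per_kg
print(f"{markup:.0e}")  # 5e+09 -- a five-billion-fold markup
```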
Alternatively, manufacture it yourself. In Neal Stephenson’s The Diamond Age, even the poorest families own a household appliance called a ‘matter compiler’, which, when fed with raw material feedstock, can produce a bewildering variety of goods. When you are done with the item, put it in the decompiler, and its atoms will be recycled. This concept might be considered the endgame for nanotechnology. Even if we start to make progress, it will be some time before we achieve it, and there will surely be intermediate advances that should count as nanotechnology. Yet given the history concerning the National Nanotechnology Initiative, we should endeavor to reserve the term for activities that at least involve manipulating individual atoms or small molecules while building up complexity.
Say goodbye to stagnation
Is nanotechnology our path out of the Great Stagnation? The case that some form of atomically precise manufacturing should work is theoretically strong. We have a pretty solid proof of concept: biology. Every living thing is made up of vast numbers of molecular devices, so how can such molecular devices be impossible? Furthermore, we know that it is possible to develop more precise tools from less precise ones: That is what human civilization itself has done, starting with sticks and rocks and progressing to steam engines, interchangeable parts, jet engines, and extreme ultraviolet photolithography. Why should we stop now? And nothing that nanotechnologists propose seems to go against any law of nature. ‘If something is permitted by the laws of physics’, says David Deutsch, ‘then the only thing that can prevent it from being technologically possible is not knowing how.’
And yet, if some nanotechnological possibilities seem a little hand-wavy to you, you are not alone. Many details remain to be worked out. It is important to remember the limits of proofs of concept. Before the Wright brothers, birds were a proof of concept for heavier-than-air flight. But airplanes operate on different principles than birds, using engines for thrust and wings for lift, while birds use wings for both. Ornithopters – flying machines that operate like birds – have been built, but they are far less capable. Bird-style flight, it turns out, works best for bird-sized objects. It may turn out, therefore, that nanotechnology will be subject to similar hidden limitations of which we are not yet aware. We may have to create our abundant future according to different principles.
But it is worth a shot. Mature nanotechnology would solve climate change, end hunger and poverty, create unfathomable abundance, enable large-scale space colonization, and, yes, even make flying cars practical. It would end the Great Stagnation with plenty of margin to spare. Even if nanotechnology had a mere one percent chance of only doubling world GDP – and both of these numbers seem dubiously small – it would be worth about a trillion dollars to accelerate its arrival by one year.
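The expected-value claim is straightforward to spell out. The ~$100 trillion world GDP figure is an assumed round number.

```python
# The expected-value claim, spelled out. Assumes world GDP of ~$100 trillion.

world_gdp = 100e12
p_success = 0.01          # "a mere one percent chance"
gdp_multiple = 1.0        # "only doubling" adds one GDP's worth of output

value_of_one_year = p_success * gdp_multiple * world_gdp
print(f"${value_of_one_year:.0e}")  # ~$1e+12 per year of acceleration
```

And as the text notes, both the one percent and the mere doubling look like severe underestimates.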
A serious effort to develop nanotechnology need not cost that much. Multiple nanotechnology roadmaps have been developed, including by the Foresight Institute, the National Academies of Sciences, and Adam Marblestone. A particularly promising approach, supported by Marblestone and others, suggests itself from recent progress in what is known as DNA origami, the use of DNA molecules for their structural rather than informational properties. Scientists are getting continually better at folding DNA into arbitrary shapes, including in three dimensions. The idea is to create a molecular 3D printer using DNA for the scaffolding and moving parts, and synthetic molecules called spiroligomers as Lego-like bricks that the printer could lay down.
Storrs Hall estimates that we could have real nanotechnology within a decade if we pursued both the top-down Feynman path and the bottom-up Drexler path simultaneously, meeting in the middle. Can you imagine? Within our lifetimes! While that timeline seems optimistic, I agree unreservedly that it should be tried. The fact that we have dallied this long itself says something about the nature of our economic stagnation. Storrs Hall calls our reluctance to even try to advance nanotechnology, in a term he attributes to Arthur C. Clarke, a failure of nerve. He’s right – nerve may be the scarcest factor of all.