Nanotechnology — A Series of Papers on the Subject – Audio and Text


An Overview of Nanotechnology
Adapted by J. Storrs Hall from papers by Ralph C. Merkle and K. Eric Drexler


Nanotechnology is an anticipated manufacturing technology giving thorough, inexpensive control of the structure of matter. The term has sometimes been used to refer to any technique able to work at a submicron scale; here on sci.nanotech we are interested in what is sometimes called molecular nanotechnology, which means basically “a place for every atom and every atom in its place.” (Other terms, such as molecular engineering and molecular manufacturing, are also often applied.)

Molecular manufacturing will enable the construction of giga-ops computers smaller than a cubic micron; cell repair machines; personal manufacturing and recycling appliances; and much more.


Broadly speaking, the central thesis of nanotechnology is that almost any chemically stable structure that can be specified can in fact be built. This possibility was first advanced by Richard Feynman in 1959 [4] when he said: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.” (Feynman won the 1965 Nobel prize in physics).

This concept is receiving increasing attention in the research community. There have been two international conferences directly on molecular nanotechnology[30,31] as well as a broad range of conferences on related subjects. Science [23, page 26] said “The ability to design and manufacture devices that are only tens or hundreds of atoms across promises rich rewards in electronics, catalysis, and materials. The scientific rewards should be just as great, as researchers approach an ultimate level of control – assembling matter one atom at a time.” “Within the decade, [John] Foster [at IBM Almaden] or some other scientist is likely to learn how to piece together atoms and molecules one at a time using the STM [Scanning Tunnelling Microscope].”

Eigler and Schweizer[25] at IBM reported on “…the use of the STM at low temperatures (4 K) to position individual xenon atoms on a single-crystal nickel surface with atomic precision. This capacity has allowed us to fabricate rudimentary structures of our own design, atom by atom. The processes we describe are in principle applicable to molecules also. …”


Drexler[1,8,11,19,32] has proposed the “assembler”, a device having a submicroscopic robotic arm under computer control. It will be capable of holding and positioning reactive compounds in order to control the precise location at which chemical reactions take place. This general approach should allow the construction of large atomically precise objects by a sequence of precisely controlled chemical reactions, building objects molecule by molecule. If designed to do so, assemblers will be able to build copies of themselves, that is, to replicate.

Because they will be able to copy themselves, assemblers will be inexpensive. We can see this by recalling that many other products of molecular machines–firewood, hay, potatoes–cost very little. By working in large teams, assemblers and more specialized nanomachines will be able to build objects cheaply. By ensuring that each atom is properly placed, they will manufacture products of high quality and reliability. Left-over molecules would be subject to this strict control as well, making the manufacturing process extremely clean.


The plausibility of this approach can be illustrated by the ribosome. Ribosomes manufacture all the proteins used in all living things on this planet. A typical ribosome is relatively small (a few thousand cubic nanometers) and is capable of building almost any protein by stringing together amino acids (the building blocks of proteins) in a precise linear sequence. To do this, the ribosome has a means of grasping a specific amino acid (more precisely, it has a means of selectively grasping a specific transfer RNA, which in turn is chemically bonded by a specific enzyme to a specific amino acid), of grasping the growing polypeptide, and of causing the specific amino acid to react with and be added to the end of the polypeptide[9].

The instructions that the ribosome follows in building a protein are provided by mRNA (messenger RNA). This is a polymer formed from the four bases adenine, cytosine, guanine, and uracil. A sequence of several hundred to a few thousand such bases codes for a specific protein. The ribosome “reads” this “control tape” sequentially, and acts on the directions it provides.
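The ribosome’s sequential reading of its mRNA “control tape” can be sketched as a simple lookup loop. The codon table below is only a tiny illustrative fragment of the real genetic code, and the function is a toy, not a model of actual ribosome chemistry:

```python
# Toy sketch of the ribosome's "control tape": read an mRNA strand three
# bases (one codon) at a time and map each codon to an amino acid.
# Only a small fragment of the real 64-codon genetic code is included.
GENETIC_CODE = {
    "AUG": "Met",   # methionine; also the start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UGA": "STOP",  # one of the three stop codons
}

def translate(mrna):
    """Read codons sequentially, appending residues until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUGA"))  # ['Met', 'Phe', 'Gly']
```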


In an analogous fashion, an assembler will build an arbitrary molecular structure following a sequence of instructions. The assembler, however, will provide three-dimensional positional and full orientational control over the molecular component (analogous to the individual amino acid) being added to a growing complex molecular structure (analogous to the growing polypeptide). In addition, the assembler will be able to form any one of several different kinds of chemical bonds, not just the single kind (the peptide bond) that the ribosome makes.

Calculations indicate that an assembler need not inherently be very large. Enzymes “typically” weigh about 10^5 amu (atomic mass units), while the ribosome itself is about 3 x 10^6 amu[9]. The smallest assembler might be a factor of ten or so larger than a ribosome. Current design ideas for an assembler are somewhat larger than this: cylindrical “arms” about 100 nanometers in length and 30 nanometers in diameter, rotary joints to allow arbitrary positioning of the tip of the arm, and a worst-case positional accuracy at the tip of perhaps 0.1 to 0.2 nanometers, even in the presence of thermal noise. Even a solid block of diamond as large as such an arm weighs only sixteen million amu, so we can safely conclude that a hollow arm of such dimensions would weigh less. Six such arms would weigh less than 10^8 amu.

Molecular Computers

The assembler requires a detailed sequence of control signals, just as the ribosome requires mRNA to control its actions. Such detailed control signals can be provided by a computer. A feasible design for a molecular computer has been presented by Drexler[2,11]. This design is mechanical in nature, and is based on sliding rods that interact by blocking or unblocking each other at “locks.” This design has a size of about 5 cubic nanometers per “lock” (roughly equivalent to a single logic gate). Quadrupling this size to 20 cubic nanometers (to allow for power, interfaces, and the like) and assuming that we require a minimum of 10^4 “locks” to provide minimal control results in a volume of 2 x 10^5 cubic nanometers (0.0002 cubic microns) for the computational element. (This many gates is sufficient to build a simple 4-bit or 8-bit general purpose computer, e.g., a 6502.)
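The volume arithmetic is simple enough to check directly; this sketch just reproduces the figures quoted in the text:

```python
# Back-of-the-envelope check of the rod-logic computer's size, using the
# figures quoted above (values from the text, not a detailed design).
NM3_PER_LOCK = 5       # volume of one "lock" (~one logic gate), cubic nm
OVERHEAD_FACTOR = 4    # allowance for power, interfaces, and the like
LOCKS = 10**4          # minimal gate count for a simple 4/8-bit CPU

volume_nm3 = NM3_PER_LOCK * OVERHEAD_FACTOR * LOCKS
volume_um3 = volume_nm3 / 1e9  # 1 cubic micron = 10^9 cubic nanometers

print(volume_nm3)  # 200000 (i.e., 2 x 10^5 cubic nanometers)
print(volume_um3)  # 0.0002 cubic microns
```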

An assembler might have a kilobyte of high-speed (rod-logic based) RAM (similar to the amount of RAM used in a modern one-chip computer) and 100 kilobytes of slower but more dense “tape” storage – this tape storage would have a mass of 10^8 amu or less (roughly 10 atoms per bit – see below). Some additional mass will be used for communications (sending and receiving signals from other computers) and power. In addition, there will probably be a “toolkit” of interchangeable tips that can be placed at the ends of the assembler’s arms. When everything is added up, a small assembler, with arms, computer, “toolkit,” etc. should weigh less than 10^9 amu.
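The 10^8 amu figure for the tape can be checked with the same back-of-the-envelope arithmetic; the 10 atoms per bit comes from the text, while the 12 amu per atom (carbon) is an assumption of this sketch:

```python
# Rough mass budget for the assembler's 100-kilobyte "tape" storage.
# "~10 atoms per bit" is the figure from the text; "~12 amu per atom"
# (carbon) is an assumed value for this estimate.
KILOBYTES = 100
bits = KILOBYTES * 1000 * 8   # 800,000 bits
atoms = bits * 10             # ~10 atoms per bit
mass_amu = atoms * 12         # ~12 amu per (carbon) atom

print(f"{mass_amu:.1e} amu")  # 9.6e+07 amu -- i.e., 10^8 amu or less
```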

Escherichia coli (a common bacterium) weighs about 10^12 amu[9, page 123]. Thus, an assembler should be much larger than a ribosome, but much smaller than a bacterium.

Self-Replicating Systems

It is also interesting to compare Drexler’s architecture for an assembler with the von Neumann architecture for a self-replicating device. Von Neumann’s “universal constructing automaton”[21] had both a universal Turing machine to control its functions and a “constructing arm” to build the “secondary automaton.” The constructing arm can be positioned in a two-dimensional plane, and the “head” at the end of the constructing arm is used to build the desired structure. While von Neumann’s construction was theoretical (existing in a two-dimensional cellular-automaton world), it still embodied many of the critical elements that now appear in the assembler.

Should we be concerned about runaway replicators? It would be hard to build a machine with the wonderful adaptability of living organisms. The replicators easiest to build will be inflexible machines, like automobiles or industrial robots, and will require special fuels and raw materials, the equivalents of hydraulic fluid and gasoline. To build a runaway replicator that could operate in the wild would be like building a car that could go off-road and fuel itself from tree sap. With enough work, this should be possible, but it will hardly happen by accident. Without replication, accidents would be like those of industry today: locally harmful, but not catastrophic to the biosphere. Catastrophic problems seem more likely to arise through deliberate misuse, such as the use of nanotechnology for military aggression.

Positional Chemistry

Chemists have been remarkably successful at synthesizing a wide range of compounds with atomic precision. Their successes, however, are usually small in size (with the notable exception of various polymers). Thus, we know that a wide range of atomically precise structures with perhaps a few hundreds of atoms in them are quite feasible. Larger atomically precise structures with complex three-dimensional shapes can be viewed as a connected sequence of small atomically precise structures. While chemists have the ability to precisely sculpt small collections of atoms there is currently no ability to extend this capability in a general way to structures of larger size. An obvious structure of considerable scientific and economic interest is the computer. The ability to manufacture a computer from atomically precise logic elements of molecular size, and to position those logic elements into a three-dimensional volume with a highly precise and intricate interconnection pattern would have revolutionary consequences for the computer industry.

A large atomically precise structure, however, can be viewed as simply a collection of small atomically precise objects which are then linked together. To build a truly broad range of large atomically precise objects requires the ability to create highly specific positionally controlled bonds. A variety of highly flexible synthetic techniques have been considered in [32]. We shall describe two such methods here to give the reader a feeling for the kind of methods that will eventually be feasible.

We assume that positional control is available and that all reactions take place in a hard vacuum. The use of a hard vacuum allows highly reactive intermediate structures to be used, e.g., a variety of radicals with one or more dangling bonds. Because the intermediates are in a vacuum, and because their position is controlled (as opposed to solutions, where the position and orientation of a molecule are largely random), such radicals will not react with the wrong thing for the very simple reason that they will not come into contact with the wrong thing.

Normal solution-based chemistry offers a smaller range of controlled synthetic possibilities. For example, highly reactive compounds in solution will promptly react with the solution. In addition, because positional control is not provided, compounds randomly collide with other compounds. Any reactive compound will collide randomly and react randomly with anything available. Solution-based chemistry requires extremely careful selection of compounds that are reactive enough to participate in the desired reaction, but sufficiently non-reactive that they do not accidentally participate in an undesired side reaction. Synthesis under these conditions is somewhat like placing the parts of a radio into a box, shaking, and pulling out an assembled radio. The ability of chemists to synthesize what they want under these conditions is amazing.

Much of current solution-based chemical synthesis is devoted to preventing unwanted reactions. With assembler-based synthesis, such prevention is a virtually free by-product of positional control.

To illustrate positional synthesis in vacuum somewhat more concretely, let us suppose we wish to bond two compounds, A and B. As a first step, we could utilize positional control to selectively abstract a specific hydrogen atom from compound A. To do this, we would employ a radical that had two spatially distinct regions: one region would have a high affinity for hydrogen while the other region could be built into a larger “tip” structure that would be subject to positional control. A simple example would be the 1-propynyl radical, which consists of three co-linear carbon atoms and three hydrogen atoms bonded to the sp3 carbon at the “base” end. The radical carbon at the radical end is triply bonded to the middle carbon, which in turn is singly bonded to the base carbon. In a real abstraction tool, the base carbon would be bonded to other carbon atoms in a larger diamondoid structure which provides positional control, and the tip might be further stabilized by a surrounding “collar” of unreactive atoms attached near the base that would prevent lateral motions of the reactive tip.

The affinity of this structure for hydrogen is quite high. Propyne (the same structure but with a hydrogen atom bonded to the “radical” carbon) has a hydrogen-carbon bond dissociation energy in the vicinity of 132 kilocalories per mole. As a consequence, a hydrogen atom will prefer being bonded to the 1-propynyl hydrogen abstraction tool in preference to being bonded to almost any other structure. By positioning the hydrogen abstraction tool over a specific hydrogen atom on compound A, we can perform a site-specific hydrogen abstraction reaction. This requires positional accuracy of roughly a bond length (to prevent abstraction of an adjacent hydrogen). Quantum chemical analysis of this reaction by Musgrave et al.[41] shows that the activation energy for this reaction is low, and that for the abstraction of hydrogen from the hydrogenated diamond (111) surface (modeled by isobutane) the barrier is very likely zero.
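To see why a 132 kcal/mol bond makes the abstraction so strongly favored, one can compare it against a weaker C-H bond via a simple Boltzmann factor. The roughly 96 kcal/mol figure for isobutane’s tertiary C-H bond is an assumed literature value, not taken from the text, so this is an order-of-magnitude sketch only:

```python
import math

# Hedged estimate of how strongly hydrogen "prefers" the abstraction tool.
# The tool's C-H bond is ~132 kcal/mol (propyne, from the text); a typical
# tertiary C-H bond, as in isobutane, is roughly 96 kcal/mol (an assumed
# literature value, not a figure from the text).
R_KCAL = 1.987e-3      # gas constant, kcal/(mol*K)
T = 300.0              # room temperature, kelvin

delta_e = 132.0 - 96.0                     # kcal/mol favoring the tool
ratio = math.exp(delta_e / (R_KCAL * T))   # equilibrium preference ratio

print(f"{ratio:.1e}")  # an astronomically large preference for the tool
```

Even with generous error bars on the assumed bond energies, the exponential makes the hydrogen transfer to the tool effectively irreversible at room temperature.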

Having once abstracted a specific hydrogen atom from compound A, we can repeat the process for compound B. We can now join compound A to compound B by positioning the two compounds so that the two dangling bonds are adjacent to each other, and allowing them to bond.

This illustrates a reaction using a single radical. With positional control, we could also use two radicals simultaneously to achieve a specific objective. Suppose, for example, that two atoms A1 and A2 which are part of some larger molecule are bonded to each other. If we were to position the two radicals X1 and X2 adjacent to A1 and A2, respectively, then a bonding structure of much lower free energy would be one in which the A1-A2 bond was broken, and two new bonds A1-X1 and A2-X2 were formed. Because this reaction involves breaking one bond and making two bonds (i.e., the reaction product is not a radical and is chemically stable) the exact nature of the radicals is not critical. Breaking one bond to form two bonds is a favored reaction for a wide range of cases. Thus, the positional control of two radicals can be used to break any of a wide range of bonds.

A range of other reactions involving a variety of reactive intermediate compounds (carbenes are among the more interesting ones) are proposed in [32], along with the results of semi-empirical and ab initio quantum calculations and the available experimental evidence.

Another general principle that can be employed with positional synthesis is the controlled use of force. Activation energy, normally provided by thermal energy in conventional chemistry, can also be provided by mechanical means. Pressures of 1.7 megabars have been achieved experimentally in macroscopic systems[43]. At the molecular level such pressure corresponds to forces that are a large fraction of the force required to break a chemical bond. A molecular vise made of hard diamond-like material with a cavity designed with the same precision as the reactive site of an enzyme can provide activation energy by the extremely precise application of force, thus causing a highly specific reaction between two compounds.
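To connect the quoted 1.7-megabar pressure with bond-breaking forces, one can multiply it by a single-atom cross-section. Both the atomic area and the bond-rupture force below are assumed typical values for illustration, not figures from the text:

```python
# Rough conversion of the quoted 1.7-megabar pressure into a force on a
# single-atom cross-section, to compare with bond-breaking forces.
# The atomic area and rupture force are assumed typical values.
PRESSURE_PA = 1.7e6 * 1e5    # 1.7 megabars; 1 bar = 10^5 pascals
ATOM_AREA_M2 = 4e-20         # ~(0.2 nm)^2 cross-section (assumed)
BOND_RUPTURE_N = 6e-9        # ~6 nN to rupture a C-C bond (assumed)

force = PRESSURE_PA * ATOM_AREA_M2
print(f"{force:.1e} N")          # on the order of nanonewtons
print(force / BOND_RUPTURE_N)    # comparable to the bond-rupture force
```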

To achieve the low activation energy needed in reactions involving radicals requires little force, allowing a wider range of reactions to be caused by simpler devices (e.g., devices that are able to generate only small force). Further analysis is provided in [32].

Feynman said: “The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed – a development which I think cannot be avoided.” Drexler has provided the substantive analysis required before this objective can be turned into a reality. We are nearing an era when we will be able to build virtually any structure that is specified in atomic detail and which is consistent with the laws of chemistry and physics. This has substantial implications for future medical technologies and capabilities.


One consequence of the existence of assemblers is that they are cheap. Because an assembler can be programmed to build almost any structure, it can in particular be programmed to build another assembler. Thus, self-reproducing assemblers should be feasible, and in consequence the manufacturing costs of assemblers would be primarily the cost of the raw materials and energy required in their construction. Eventually (after amortization of possibly quite high development costs), the price of assemblers (and of the objects they build) should be no higher than the price of other complex structures made by self-replicating systems. Potatoes – which have a staggering design complexity involving tens of thousands of different genes and different proteins directed by many megabits of genetic information – cost well under a dollar per pound.


The three paths of protein design (biotechnology), biomimetic chemistry, and atomic positioning are parts of a broad bottom-up strategy: working at the molecular level to increase our ability to control matter. Traditional miniaturization efforts based on microelectronics technology have reached the submicron scale; these can be characterized as the top-down strategy. The bottom-up strategy, however, seems more promising.

Information

More information on nanotechnology can be found in these books (all by Eric Drexler (and various co-authors)):

Engines of Creation (Anchor, 1986) ISBN: 0-385-19972-2

This book was the definition of the original charter of sci.nanotech. Popularly written, it introduces assemblers, and discusses the various social and technical implications nanotechnology might have.

Unbounding the Future (Morrow, 1991) ISBN: 0-688-09124-5

Essentially an update of Engines, with a better low-level description of how nanomachines might work, and less speculation on space travel, cryonics, etc.

Nanosystems (Wiley, 1992) ISBN: 0-471-57518-6

This is the technical book that grew out of Drexler’s PhD thesis. It is a real tour de force that provides a substantial theoretical background for nanotech ideas.

The Foresight Institute publishes on both technical and nontechnical issues in nanotechnology. For example, students may write for their free Briefing #1, “Studying Nanotechnology”. The Foresight Institute’s main publications are the Update newsletter and Background essay series. The Update newsletter includes both policy discussions and a technical column enabling readers to find material of interest in the recent scientific literature. These publications can be found at Foresight’s web page.

A set of papers and the archives of sci.nanotech can be had by standard anonymous FTP to /nanotech

Sci.nanotech is moderated and is intended to be of a technical nature.

–JoSH (moderator)


[Not all of these are referred to in the text, but they are of interest nevertheless.]

1. “Engines of Creation” by K. Eric Drexler, Anchor Press, 1986.

2. “Nanotechnology: wherein molecular computers control tiny circulatory submarines”, by A. K. Dewdney, Scientific American, January 1988, pages 100 to 103.

3. “Foresight Update”, a publication of the Foresight Institute, Box 61058, Palo Alto, CA 94306.

4. “There’s Plenty of Room at the Bottom” a talk by Richard Feynman (awarded the Nobel Prize in Physics in 1965) at an annual meeting of the American Physical Society given on December 29, 1959. Reprinted in “Miniaturization”, edited by H. D. Gilbert (Reinhold, New York, 1961) pages 282-296.

5. “Scanning Tunneling Microscopy and Atomic Force Microscopy: Application to Biology and Technology” by P. K. Hansma, V. B. Elings, O. Marti, and C. E. Bracker. Science, October 14 1988, pages 209-216.

6. “Molecular manipulation using a tunnelling microscope,” by J. S. Foster, J. E. Frommer and P. C. Arnett. Nature, Vol. 331 28 January 1988, pages 324-326.

7. “The fundamental physical limits of computation” by Charles H. Bennett and Rolf Landauer, Scientific American Vol. 253, July 1985, pages 48-56.

8. “Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation,” by K. Eric Drexler, Proceedings of the National Academy of Sciences (USA), Vol 78, pp 5275-78, 1981.

9. “Molecular Biology of the Gene”, fourth edition, by James D. Watson, Nancy H. Hopkins, Jeffrey W. Roberts, Joan Argetsinger Steitz, and Alan M. Weiner. Benjamin Cummings, 1987. It can now be purchased as a single large volume.

10. “Tiny surgical robot being developed”, San Jose Mercury News, Feb. 18, 1989, page 26A

11. “Rod Logic and Thermal Noise in the Mechanical Nanocomputer”, by K. Eric Drexler, Proceedings of the Third International Symposium on Molecular Electronic Devices, F. Carter ed., Elsevier 1988.

12. “Submarines small enough to cruise the bloodstream”, in Business Week, March 27 1989, page 64.

13. “Conservative Logic”, by Edward Fredkin and Tommaso Toffoli, International Journal of Theoretical Physics, Vol. 21 Nos. 3/4, 1982, pages 219-253.

14. “The Tomorrow Makers”, Grant Fjermedal, MacMillan 1986.

15. “Dissipation and noise immunity in computation and communication” by Rolf Landauer, Nature, Vol. 335, October 27 1988, page 779.

16. “Notes on the History of Reversible Computation” by Charles H. Bennett, IBM Journal of Research and Development, Vol. 32, No. 1, January 1988.

17. “Classical and Quantum Limitations on Energy Consumption in Computation” by K. K. Likharev, International Journal of Theoretical Physics, Vol. 21, Nos. 3/4, 1982.

18. “Principles and Techniques of Electron Microscopy: Biological Applications,” Third edition, by M. A. Hayat, CRC Press, 1989.

19. “Machines of Inner Space” by K. Eric Drexler, 1990 Yearbook of Science and the Future, pages 160-177, published by Encyclopedia Britannica, Chicago 1989.

20. “Reversible Conveyer Computation in Array of Parametric Quantrons” by K. K. Likharev, S. V. Rylov, and V. K. Semenov, IEEE Transactions on Magnetics, Vol. 21 No. 2, March 1985, pages 947-950

21. “Theory of Self Reproducing Automata” by John Von Neumann, edited by Arthur W. Burks, University of Illinois Press, 1966.

22. “The Children of the STM” by Robert Pool, Science, Feb. 9, 1990, pages 634-636.

23. “A Small Revolution Gets Under Way,” by Robert Pool, Science, Jan. 5 1990.

24. “Advanced Automation for Space Missions”, Proceedings of the 1980 NASA/ASEE Summer Study, edited by Robert A. Freitas, Jr. and William P. Gilbreath. Available from NTIS, U.S. Department of Commerce, National Technical Information Service, Springfield, VA 22161; telephone 703-487-4650, order no. N83-15348

25. “Positioning Single Atoms with a Scanning Tunnelling Microscope,” by D. M. Eigler and E. K. Schweizer, Nature Vol 344, April 5 1990, pages 524-526.

26. “Mind Children” by Hans Moravec, Harvard University Press, 1988.

27. “Microscopy of Chemical-Potential Variations on an Atomic Scale” by C.C. Williams and H.K. Wickramasinghe, Nature, Vol 344, March 22 1990, pages 317-319.

28. “Time/Space Trade-Offs for Reversible Computation” by Charles H. Bennett, SIAM J. Computing, Vol. 18, No. 4, pages 766-776, August 1989.

29. “Fixation for Electron Microscopy” by M. A. Hayat, Academic Press, 1981.

30. “Nonexistent technology gets a hearing,” by I. Amato, Science News, Vol. 136, November 4, 1989, page 295.

31. “The Invisible Factory,” The Economist, December 9, 1989, page 91.

32. “Nanosystems: Molecular Machinery, Manufacturing and Computation,” by K. Eric Drexler, John Wiley 1992.

33. “MITI heads for inner space” by David Swinbanks, Nature, Vol 346, August 23 1990, pages 688-689.

34. “Fundamentals of Physics,” Third Edition Extended, by David Halliday and Robert Resnick, Wiley 1988.

35. “General Chemistry” Second Edition, by Donald A. McQuarrie and Peter A. Rock, Freeman 1987.

36. “Charles Babbage On the Principles and Development of the Calculator and Other Seminal Writings” by Charles Babbage and others. Dover, New York, 1961.

37. “Molecular Mechanics” by U. Burkert and N. L. Allinger, American Chemical Society Monograph 177 (1982).

38. “Breaking the Diffraction Barrier: Optical Microscopy on a Nanometric Scale” by E. Betzig, J. K. Trautman, T.D. Harris, J.S. Weiner, and R.L. Kostelak, Science Vol. 251, March 22 1991, page 1468.

39. “Two Types of Mechanical Reversible Logic,” by Ralph C. Merkle, submitted to Nanotechnology.

40. “Atom by Atom, Scientists build ‘Invisible’ Machines of the Future,” Andrew Pollack, The New York Times, Science section, Tuesday November 26, 1991, page B7.

41. “Theoretical analysis of a site-specific hydrogen abstraction tool,” by Charles Musgrave, Jason Perry, Ralph C. Merkle and William A. Goddard III, in Nanotechnology, April 1992.

42. “Near-Field Optics: Microscopy, Spectroscopy, and Surface Modifications Beyond the Diffraction Limit” by Eric Betzig and Jay K. Trautman, Science, Vol. 257, July 10 1992, pages 189-195.

43. “Guinness Book of World Records,” Donald McFarlan et al., Bantam 1989.


Will Human Beings be Squeezed Out of Existence?

Be Afraid
by Jack Beatty

Source: The Atlantic Monthly Group
All material copyright © 2000 All rights reserved.

April 6, 2000

If the digital revolution is soon to produce what Bill Joy — one of the world’s leading technologists — fears is a dystopian nightmare, the only hope for humanity may be the end of capitalism as we know it. Try selling that in an election year.

“This is the first moment in the history of our planet when any species, by its own involuntary actions, has become a danger to itself — as well as to vast numbers of others.”
–Carl Sagan

In the projectable future robots will replace “biological humans” as economic actors. Unable to compete in the marketplace with their super-intelligent creations, human beings won’t be able to afford what they need to live and will “be squeezed out of existence.” That is the dystopian vision of robotics. The utopian vision is that humans will attain immortality by “downloading” themselves into the undying electronic being of robots.

Genetic engineering will give evil new life, putting the power to loose new plagues on humanity into the hands of terrorists, madmen, and despots. Genetic engineering will soon allow our descendants to end hunger, to create myriad new species with myriad scientific and economic possibilities, to increase our life-span, and to improve our quality of life in dimensions beyond our present means to calculate (in the future we can all be blondes!).

Nanotechnology — manufacturing at a molecular level — will create plants that will “outcompete real plants,” surrounding us with an inedible jungle and spawning “omnivorous bacteria” that, wind borne, will spread a self-replicating pollen that “could reduce the biosphere to dust in a matter of days.” Nanotechnology promises to achieve huge results through the manipulation of the infinitesimal. Molecular-level “assemblers” engineered by nanotechnology will allow a grateful humanity to cure cancer, to abandon the use of fossil fuels before they render the planet uninhabitable (replacing them with cost-effective, environmentally salubrious solar power), and to compact all knowledge into a wristwatch.

These clashing vistas of the possible are taken from a horizon-widening 20,000 word article in the April issue of Wired by Bill Joy, the “Chief Scientist” and cofounder of Sun Microsystems. The Chief Scientist is afraid of science. The twenty-first-century technologies of genetics, nanotechnology, and robotics (GNR) could give us “knowledge-enabled mass destruction” to supplement existing arsenals of twentieth-century triumphs of mass destruction like nuclear bombs and germ warfare. Not since Jonathan Schell’s 1982 vision of nuclear devastation, The Fate of the Earth, has one seen a preview of apocalypse to compare with this passage:

I think it is no exaggeration to say that we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Humankind’s only method of escape from a future so terrible as to drive us off the planet is “relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge” — a dread step contemplated by scientists Joy respects.

Relinquishment will require a degree of monitoring and verification that only a government with near-dictatorial powers could enforce — and remember, this is the good news. Government has no monopoly on GNR technologies, as it does on NBC — nuclear, biological, and chemical — weapons of mass destruction. The GNR revolution is being led by the private sector, meaning that, to achieve relinquishment, private businesses will have to significantly evolve into semi-public entities under the total scrutiny of thought-police. This is incompatible with capitalism as we know it. In history’s bitterest irony, the “knowledge economy” will be the end of Western man’s Faustian drive to know at any cost.

To ponder questions of this heft in an election year, when no candidate in his right mind will ask any member of the electorate to sacrifice anything, is to despair of democracy. If two dollars a gallon for gasoline has the voters in an uproar, imagine their reaction to Joy’s thought police! Politics is more and more about disguising problems than about bringing important issues to the fore. The politics of candor can’t seem to get traction against the politics of escape. Increasingly, candidates who identify problems so as to make issues of them — Bill Bradley making universal health insurance the justification of his candidacy; John McCain running against systemic corruption — get penalized for ruffling the national complacency. Politics needs to be made safe for bad news before issues like global warming, much less the relinquishment of dangerous technology, can get a serious hearing.

Joy thinks our “great capacity of caring” for the things we love about existence will somehow see us through the end of capitalism and of knowledge as we know it, but this closing flourish of optimism is at dramatic variance with the inventory of horribles he parades in an article comfortlessly titled, “Why the future doesn’t need us.”

Jack Beatty is a senior editor at The Atlantic Monthly and the author of The World According to Peter Drucker (1997) and The Rascal King: The Life and Times of James Michael Curley (1992).

Anthony King
29 March 2022
Nature Website


Tiny machines that deliver therapeutic payloads to precise locations in the body are the stuff of science fiction. But some researchers are trying to turn them into a clinical reality…

Cancer drugs usually take a scattergun approach.


Chemotherapies inevitably hit healthy bystander cells while blasting tumors, sparking a slew of side effects. It is also a big ask for an anticancer drug to find and destroy an entire tumor – some are difficult to reach, or hard to penetrate once located.

A long-dreamed-of alternative is to inject a battalion of tiny robots into a person with cancer.


These miniature machines could navigate directly to a tumor and smartly deploy a therapeutic payload right where it is needed.

“It is very difficult for drugs to penetrate through biological barriers, such as the blood-brain barrier or mucus of the gut, but a microrobot can do that,” says Wei Gao, a medical engineer at the California Institute of Technology in Pasadena.

Among his inspirations is the 1966 film Fantastic Voyage, in which a miniaturized submarine goes on a mission to remove a blood clot in a scientist’s brain, piloted through the bloodstream by a similarly shrunken crew.


Although most of the film remains firmly in the realm of science fiction, progress on miniature medical machines in the past ten years has seen experiments move into animals for the first time.

There are now numerous micrometer- and nanometer-scale robots that can propel themselves through biological media, such as the matrix between cells and the contents of the gastrointestinal tract.


Some are moved and steered by outside forces, such as magnetic fields and ultrasound.


Others are driven by onboard chemical engines, and some are even built on top of bacteria and human cells to take advantage of those cells’ inbuilt ability to get around.


Whatever the source of propulsion, it is hoped that these tiny robots will be able to deliver therapies to places that a drug alone might not be able to reach, such as into the centre of solid tumors.


However, even as those working on medical nano- and microrobots begin to collaborate more closely with clinicians, it is clear that the technology still has a long way to go on its fantastic journey towards the clinic.

In the 1966 film Fantastic Voyage, a miniaturized medical team goes on a mission to remove a blood clot in a scientist’s brain. Contributor: Collection Christophel/Alamy Stock Photo

Poetry in motion

One of the key challenges for a robot operating inside the human body is getting around. In Fantastic Voyage, the crew uses blood vessels to move through the body.


However, it is here that reality must immediately diverge from fiction.

“I love the movie,” says roboticist Bradley Nelson, gesturing to a copy of it in his office at the Swiss Federal Institute of Technology (ETH) Zurich in Switzerland.

“But the physics are terrible.”

Tiny robots would have severe difficulty swimming against the flow of blood, he says. Instead, they will initially be administered locally, then move towards their targets over short distances.

When it comes to design, size matters.

“Propulsion through biological media becomes a lot easier as you get smaller, as below a micron bots slip between the network of macromolecules,” says Peer Fischer, a robotics researcher at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany.

Bots are therefore typically no more than 1-2 micrometers across.


However, most do not fall below 300 nanometers. Below that size, it becomes more challenging to detect and track them in biological media, and more difficult to generate sufficient force to move them.
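The difficulty of moving at these scales has a standard physical footing: the Reynolds number of a micrometre-scale swimmer in water is tiny, so viscous drag dominates inertia and a bot stops dead the instant its propulsion stops. A back-of-the-envelope sketch (the sizes and speeds are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope Reynolds number for a microswimmer in water.
# Re = rho * v * L / mu.  At Re << 1 viscous drag dominates inertia,
# so a microbot cannot coast, let alone fight the flow of blood.

def reynolds(length_m, speed_m_s, density=1000.0, viscosity=1e-3):
    """Reynolds number for a body of size length_m moving at speed_m_s
    (defaults approximate water at room temperature)."""
    return density * speed_m_s * length_m / viscosity

# A 1-micrometre bot swimming at 10 micrometres per second
# (assumed, illustrative figures):
re_bot = reynolds(1e-6, 10e-6)   # ~1e-05: inertia is irrelevant
# A 1-metre human swimmer at 1 m/s, for contrast:
re_human = reynolds(1.0, 1.0)    # ~1e+06: inertia dominates

print(f"microbot Re ~ {re_bot:.0e}, human swimmer Re ~ {re_human:.0e}")
```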

Scientists have several choices for how to get their bots moving. Some opt to provide power externally.


For instance, in 2009, Fischer – who was working at Harvard University in Cambridge, Massachusetts, at the time, alongside fellow nanoroboticist Ambarish Ghosh – devised a glass propeller, just 1-2 micrometers in length, that could be rotated by a magnetic field. 1


This allowed the structure to move through water, and by adjusting the magnetic field, it could be steered with micrometer precision.
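The propulsion principle can be sketched with simple kinematics: at low Reynolds number a rotating corkscrew advances a roughly fixed distance per revolution, some fraction of its helical pitch, so forward speed scales with the field’s rotation rate. The pitch and efficiency below are illustrative assumptions, not measured values from Fischer and Ghosh’s device:

```python
# Toy kinematics of a magnetically rotated helical micropropeller.
# A rotating corkscrew at low Reynolds number advances a roughly
# fixed fraction of its helical pitch per turn.  Pitch and
# efficiency here are illustrative assumptions, not measured values.

def swim_speed(pitch_m, rotation_hz, efficiency=0.3):
    """Forward speed: (advance per turn) x (turns per second)."""
    return pitch_m * rotation_hz * efficiency

# A 1-micrometre-pitch helix spun at 100 Hz by the rotating field:
v = swim_speed(1e-6, 100.0)
print(f"speed ~ {v * 1e6:.0f} micrometres per second")
```

Below a so-called step-out frequency such a helix rotates in lockstep with the applied field, so speed can be dialled up or down simply by changing how fast the field rotates, and heading by tilting the rotation axis.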


In a 2018 study, 2 Fischer launched a swarm of micropropellers into a pig’s eye in vitro, and had them travel over centimeter distances through the gel-like vitreous humor into the retina – a rare demonstration of propulsion through real tissue.


The swarm was able to slip through the network of biopolymers within the vitreous humor thanks in part to a silicone oil and fluorocarbon coating applied to each propeller.


Inspired by the slippery surface that the carnivorous pitcher plant Nepenthes uses to catch insects, this minimized interactions between the micropropellers and biopolymers.

An electron microscope image

of a glass nanopropeller.

Credit: Conny Miksch, MPI-IS

Another way to provide propulsion from outside the body is to use ultrasound.


One group placed magnetic cores inside the membranes of red blood cells, 3 which also carried photo-reactive compounds and oxygen.


The cells’ distinctive biconcave shape and greater density than other blood components allowed them to be propelled using ultrasonic energy, with an external magnetic field acting on the metallic core to provide steering.


Once the bots are in position, light can excite the photosensitive compound, which transfers energy to the oxygen and generates reactive oxygen species to damage cancer cells.

This hijacking of cells is proving to have therapeutic merits in other research projects.


Some of the most promising strategies aimed at treating solid tumors involve human cells and other single-celled organisms jazzed up with synthetic parts.


In Germany, a group led by Oliver Schmidt, a nano-scientist at Chemnitz University of Technology, has designed a biohybrid robot based on sperm cells. 4


These are some of the fastest motile cells, capable of hitting speeds of 5 millimeters per minute, Schmidt says. The hope is that these powerful swimmers can be harnessed to deliver drugs to tumors in the female reproductive tract, guided by magnetic fields.


Already, it has been shown that they can be magnetically guided to a model tumor in a dish.

“We could load anticancer drugs efficiently into the head of the sperm, into the DNA,” says Schmidt. “Then the sperm can fuse with other cells when it pushes against them.”

At the Chinese University of Hong Kong, meanwhile, nanoroboticist Li Zhang led the creation of microswimmers from Spirulina microalgae cloaked in the mineral magnetite.


The team then tracked a swarm of them inside rodent stomachs using magnetic resonance imaging. 5


The biohybrids were shown to selectively target cancer cells. They also gradually degrade, reducing unwanted toxicity.

Another way to get micro- and nanobots moving is to fit them with a chemical engine:

a catalyst drives a chemical reaction, creating a gradient on one side of the machine to generate propulsion.

Samuel Sánchez, a chemist at the Institute for Bioengineering of Catalonia in Barcelona, Spain, is developing nanomotors driven by chemical reactions for use in treating bladder cancer.


Some early devices relied on hydrogen peroxide as a fuel.


Its breakdown, promoted by platinum, generated water and oxygen gas bubbles for propulsion. But hydrogen peroxide is toxic to cells even in minuscule amounts, so Sánchez has transitioned towards safer materials.


His latest nanomotors are made up of honeycombed silica nanoparticles, tiny gold particles and the enzyme urease. 6


These 300-400-nm bots are driven forwards by the chemical breakdown of urea in the bladder into carbon dioxide and ammonia, and have been tested in the bladders of mice.
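The fuel chemistry behind these motors is urease-catalysed hydrolysis of urea, CO(NH2)2 + H2O -> CO2 + 2 NH3. The small sketch below checks the mass balance of that equation and estimates ammonia yield; the urea concentration used is an assumed typical value, not a figure from Sánchez’s work:

```python
# Mass balance for the urease-driven motors' fuel reaction:
#   CO(NH2)2 + H2O -> CO2 + 2 NH3
# Verify the equation is balanced, then estimate ammonia yield from
# an assumed typical urine urea level (~0.3 mol/L).

ATOMS = {
    "urea":    {"C": 1, "O": 1, "N": 2, "H": 4},
    "water":   {"H": 2, "O": 1},
    "co2":     {"C": 1, "O": 2},
    "ammonia": {"N": 1, "H": 3},
}

def count(side):
    """Total atoms of each element on one side of the equation."""
    totals = {}
    for species, coeff in side:
        for atom, n in ATOMS[species].items():
            totals[atom] = totals.get(atom, 0) + coeff * n
    return totals

reactants = count([("urea", 1), ("water", 1)])
products = count([("co2", 1), ("ammonia", 2)])
assert reactants == products  # balanced: one CO2 and two NH3 per urea

urea_molar = 0.3  # mol/L, assumed typical bladder urea concentration
print(f"ammonia produced ~ {2 * urea_molar:.1f} mol per litre of urine")
```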

“We can now move them and see them inside a living system,” says Sánchez.



Breaking through

A standard treatment for bladder cancer is surgery, followed by immunotherapy in the form of an infusion of a weakened strain of Mycobacterium bovis bacteria into the bladder, to prevent recurrence.


The bacterium activates the person’s immune system, and is also the basis of the BCG vaccine for tuberculosis.

“The clinicians tell us that this is one of the few things that has not changed over the past 60 years,” says Sánchez.

There is a need to improve on BCG in oncology, according to his collaborator, urologic oncologist Antoni Vilaseca at the Hospital Clinic of Barcelona.


Current treatments reduce recurrences and progression, “but we have not improved survival”, Vilaseca says. “Our patients are still dying.”

The nanobot approach that Sánchez is trying promises precision delivery.


He plans to insert his bots into the bladder (or intravenously), to motor towards the cancer with their cargo of therapeutic agents to target cancer cells, using abundant urea as a fuel.


He might use a magnetic field for guidance, if needed, but a more straightforward replacement of BCG with bots that do not require external control, perhaps using an antibody to bind a tumor marker, would please clinicians most.

“If we can deliver our treatment to the tumor cells only, then we can reduce side effects and increase activity,” says Vilaseca.

An optical microscopy video showing a swarm of urease-powered nanomotors swimming in urea solution. Credit: Samuel Sánchez Ordóñez

Not all cancers can be reached by swimming through liquid, however. Natural physiological barriers can block efficient drug delivery.


The gut wall, for example, allows absorption of nutrients into the bloodstream, and offers an avenue for getting therapies into bodies.

“The gastrointestinal tract is the gateway to our body,” says Joseph Wang, a nanoengineer at the University of California, San Diego.

However, a combination of cells, microbes and mucus stops many particles from accessing the rest of the body.


To deliver some therapies, simply being in the intestine isn’t enough – they also need to be able to burrow through its defenses to reach the bloodstream, and a nanomachine could help with this.

In 2015, Wang and his colleagues, including Gao, reported the first self-propelled robot in vivo, inside a mouse stomach. 7


Their zinc-based nanomotor dissolved in the harsh stomach acids, producing hydrogen bubbles that rocketed the robot forwards.


In the lower gastrointestinal tract, they instead use magnesium.

“Magnesium reacts with water to give a hydrogen bubble,” says Wang.

In either case, the metal micromotors are encapsulated in a coating that dissolves at the right location, freeing the micromotor to propel the bot into the mucous wall.

Some bacteria have already worked out their own ways to sneak through the gut wall. Helicobacter pylori, which causes inflammation in the stomach, excretes urease enzymes to generate ammonia and liquefy the thick mucous that lines the stomach wall.


Fischer envisages future micro- and nanorobots borrowing this approach to deliver drugs through the gut.

Solid tumors are another difficult place to deliver a drug. As these malignancies develop, a ravenous hunger for oxygen promotes an outside surface covered with blood vessels, while an oxygen-deprived core builds up within.


Low oxygen levels force cells deep inside to switch to anaerobic metabolism and churn out lactic acid, creating acidic conditions.


As the oxygen gradient builds, the tumor becomes increasingly difficult to penetrate. Nanoparticle drugs lack a force with which to muscle through a tumor’s fortifications, and typically less than 2% of them will make it inside. 8


Proponents of nanomachines think that they can do better.

Sylvain Martel, a nanoroboticist at Montreal Polytechnic in Canada, is trying to break into solid tumors using bacteria that naturally contain a chain of magnetic iron-oxide nanocrystals.


In nature, these Magnetococcus species seek regions that have low oxygen.


Martel has engineered such a bacterium to target active cancer cells deep inside tumors. 8

“We guide them with a magnetic field towards the tumor,” explains Martel, taking advantage of the magnetic crystals that the bacteria typically use like a compass for orientation.

The precise locations of low-oxygen regions are uncertain even with imaging, but once these bacteria reach the right location, their autonomous capability kicks in and they motor towards low-oxygen regions.


In a mouse, more than half of the bacteria injected close to tumor grafts penetrated the tumor, each laden with dozens of drug-loaded liposomes.


Martel cautions, however, that there is still some way to go before the technology is proven safe and effective for treating people with cancer.

In the Netherlands, chemist Daniela Wilson at Radboud University in Nijmegen and colleagues have developed enzyme-driven nanomotors powered by DNA that might similarly be able to autonomously home in on tumor cells. 9


The motors navigate towards areas that are richer in DNA, such as tumor cells that are undergoing apoptosis.

“We want to create systems that are able to sense gradients by different endogenous fuels in the body,” Wilson says, suggesting that the higher levels of lactic acid or glucose typically found in tumors could also be used for targeting.

Once in place, the autonomous bots seem to be picked up by cells more easily than passive particles are – perhaps because the bots push against cells.

Nanoroboticist Sylvain Martel (middle)

discusses a new computer interface

with two members of his team.

Credit: Caroline Perron

Fiction versus reality

Inspirational though Fantastic Voyage might have been for many working in the field of medical nanorobotics, there are some who think the film has become a burden.

“People think of this as science fiction, which excites people, but on the other hand they don’t take it so seriously,” says Martel.

Fischer is similarly jaded by movie-inspired hype.

“People sometimes write very liberally as if nanobots for cancer treatment are almost here,” he says. “But this is not even in clinical trials right now.”

Nonetheless, advances in the past ten years have raised expectations of what is possible with current technology.

“There’s nothing more fun than building a machine and watching it move. It’s a blast,” says Nelson.

But having something wiggling under a microscope no longer has the same draw, without medical context.

“You start thinking, ‘how could this benefit society?'” he says.

With this in mind, many researchers creating nanorobots for medical purposes are working more closely with clinicians than ever before.

“You find a lot of young doctors who are really interested in what the new technologies can do,” Nelson says.

Neurologist Philipp Gruber, who works with stroke patients at Aarau Cantonal Hospital in Switzerland, began a collaboration with Nelson two years ago after contacting ETH Zurich.


The pair share an ambition to use steerable microbots to dissolve clots in people’s brains after ischaemic stroke – either mechanically, or by delivering a drug.

“Brad knows everything about engineering,” says Gruber, “but we can advise about the problems we face in the clinic and the limitations of current treatment options.”

Sánchez tells a similar story:

while he began talking to physicians around a decade ago, their interest has warmed considerably since his experiments in animals began three to four years ago.

“We are still in the lab, but at least we are working with human cells and human organoids, which is a step forward,” says his collaborator Vilaseca.

As these seedlings of clinical collaborations take root, it is likely that oncology applications will be the earliest movers – particularly those that resemble current treatments, such as infusing microbots instead of BCG into cancerous bladders.


But even these therapeutic uses are probably at least 7-10 years away. In the nearer term, there might be simpler tasks that nanobots can be used to accomplish, according to those who follow the field closely.

For example, Martin Pumera, a nanoroboticist at the University of Chemistry and Technology in Prague, is interested in improving dental care by landing nanobots beneath titanium tooth implants. 10


The tiny gap between the metal implants and gum tissue is an ideal niche for bacterial biofilms to form, triggering infection and inflammation.


When this happens, the implant must often be removed, the area cleaned, and a new implant installed – an expensive and painful procedure.


He is collaborating with dental surgeon Karel Klíma at Charles University in Prague.

Another problem the two are tackling is oral bacteria gaining access to tissue during surgery of the jaws and face.

“A biofilm can establish very quickly, and that can mean removing titanium plates and screws after surgery, even before a fracture heals,” says Klíma.

A titanium oxide robot could be administered to implants using a syringe, then activated chemically or with light to generate active oxygen species to kill the bacteria.


Examples a few micrometers in length have so far been constructed, but much smaller bots – only a few hundred nanometers in length – are the ultimate aim.

Clearly, this is a long way from parachuting bots into hard-to-reach tumors deep inside a person.


But the rising tide of in vivo experiments and the increasing involvement of clinicians suggest that microrobots might just be leaving port on their long journey towards the clinic.








Future Changes

Nanotechnology – What Changes Can We Expect?

Source: NanoTechnology Magazine

A chapter contribution from Bill Spence (NTM) for French science writer Daniel Ichbiah and Pascal Rosier’s next book: Interviews About the 21st Century

Which objects that we commonly know should disappear because of nanotechnology?

People living before and through the transition – at first, out of attachment to the things we know, and because people have not yet imagined the variety and super-rich realm of new possibilities – will ask for the objects familiar to everyday life, and assembler technology will reproduce them. People will still want cotton beach towels, although the cotton farmer will no longer be needed when fibers can be manufactured atom by atom from carbon in the air or from limestone. Lots of familiar items will appear “traditional” on the outside, yet possess a multitude of new tricks and functionality made possible with MNT – cars with Utility Fog crash protection, for instance. Of course, MNT Smart Materials can look like anything, yet perform “magic”.

Now, the next generation and the generations to follow, born into the age of nanotechnology with a “clean slate”, free of concrete historical prejudices, will design objects and lifestyles that take advantage of the new wealth of possibilities – objects and “environments” that would appear bizarrely alien, extraordinarily novel, to even the most advanced nano tinkerer today. The general concept is familiar in science fiction; only now we have a clear engineering path to make real the stunning constructs of uninhibited imaginations, and of those yet to be born.

The wild card to consider – and the reason it is frankly ludicrous to project past a few decades, or more than say one generation – is the effect nanotechnology will have on intelligence enhancement efforts. Once these efforts are even mildly successful, the “experimenters” will spend much of their time amplifying intelligence enhancement itself, and the valve controlling what is imaginable and what can be engineered opens at a geometric rate. By definition, what can and will be is unimaginable now, and I’m not even addressing the issue of machine intelligence in the equation. The curve approaches vertical.

What new objects should appear because of nanotechnology?

Perhaps the big story — with mature nanotechnology, any object can morph into any other imaginable object… truly a concept requiring personal exposure to fully understand the significance and possibilities, but to get a grip on the idea, consider this:

The age of digital matter – multi-purpose, programmable machines: change the software, and something completely different happens.

A simple can opener and a complex asphalt paver are both single-purpose machines. Ask them to clean your floor or build a radio tower and they “stare” back blankly. A computer is different: it is a multi-purpose machine – one machine that can do unlimited tasks by changing software… but only in the world of bits and information.

I’m involved with a company developing Fractal Shape Shifting Robots. Fractal Robots are programmable machines that can do unlimited tasks in the physical world, the world of matter. Load the right software and the same “machines” can take out the garbage, paint your car, or construct an office building and later wash that building’s windows. In large groups, these devices act as macro-sized (hold-in-your-hand) “nanobots”, possessing AND performing many of the desirable features of mature nanomachines (as described in Drexler’s Engines of Creation, Unbounding the Future, Nanosystems, etc.). This is the beginning of “Digital Matter”.

These Robots look like “Rubik’s Cubes” that can “slide” over each other on command, changing and moving into any overall shape desired for a particular task. The cubes communicate with each other and share power through simple internal induction coils, and have batteries, a small computer, and various kinds of internal magnetic and electric inductive motors (depending on size) used to move over other cubes. When sufficiently miniaturized (below 0.1 mm) and fabricated using photolithography methods, cubes can also be programmed to assemble other cubes of smaller or larger size. This “self-assembly” is an important feature that will drop cost dramatically.

The point is – if you have enough cubes of small enough dimension, they can slide over each other, or “morph”, into any object with just about any function one can imagine and program for. Sufficiently miniaturized cubes could be programmed to behave like the “T-2” Terminator robot in the Arnold Schwarzenegger movie, or a lawn chair… just about any animate or inanimate object.

Fractal Shape Shifting Robots have been in prototype for the last two years, and I rather expect this form of “digital matter” to hit the commercial scene very soon. In the near future, if you gaze out your window and see something vaguely resembling an amoeba constructing an office building, you’ll know what “IT” is.

This is not to say single-purpose objects will not be desirable… Back to cotton – although cubes could mimic the exact appearance of a fuzzy down comforter (a blanket), a version made out of cubes would be heavy and would not have the same thermal properties. Although, through a heroic engineering effort, such a “blanket” could be made to insulate and pipe gases like a comforter, and even “levitate” slightly to mimic the weight and mass, why bother when the real thing can be manufactured atom by atom, on site, at about a meter a second (depending on thermal considerations).

Also, “single purpose” components of larger machines will be built to take advantage of the fantastic structural properties of diamondoid-Buckytube composites, for such things as thin, super-strong aircraft parts. Today, using the theoretical properties of such materials, we can design an efficient, quiet, super-safe personal vertical-takeoff aerocar. This vehicle of science fiction is probably science future.

Which industries should disappear because of nano-technology?

Everything – except software (everything will run on software), general engineering as it relates to this new power over matter… and the entertainment industry. Unfortunately, there will still be insurance salesmen and lawyers, although not in my solar-orbiting city state. If, as Drexler suggests, we can pave streets with self-assembling solar cells, I would tend to avoid energy stocks. Mature nanites could mine any material from the earth, landfills, or asteroids at very low cost and in great abundance. The mineral business is about to change. Traditional manufacturing will not be able to compete with assembler technology, and what happens to all those jobs and the financial markets is a big, big issue that needs to be addressed now. I intend to start or expand organizations addressing these issues and cover progress in the pages of NanoTechnology Magazine.

We will have a lot of obsolete mental baggage and programming to throw out of our heads… Traditional pursuits of money will need to be reevaluated when a personal assembler can manufacture a fleet of Porsches that run circles around today’s models. As Drexler so intuitively points out, the best thing to do is to get the whole world’s society educated and understanding what will and can happen with this technology. This will help people make the transition and keep mental and financial meltdowns to a minimum.

Which new industries should appear because of nanotechnology?

Future generations are laughing as they read these words… Laughing at the utter inadequacy and closed imagination of this writing… So consider this a comically inadequate list. However, if they are laughing, I am satisfied and at peace, as this means we made it through the transition (although I fear it shall not be the last).

Mega-engineering for space habitation and transport in the Solar System will have a serious future. People will be surprised at how fast space develops, because right now a very bright core of nano-space enthusiasts have engineering plans awaiting the arrival of the molecular assembler. People like Forrest Bishop have wonderful plans for space transport and development, capable of being implemented in surprisingly short time frames. This is artificial life, programmed to “grow” faster than natural systems. I think Mars will be terraformed in less time than it took to build a nuclear power plant in the latter half of the good old, backward 20th century.

An explosion in the arts and service industries is to be expected when no fields need to be plowed for our daily bread, similar to the explosion when agriculture became mechanized and efficient and the sons and daughters of farmers migrated to cities. This explosion will be exponentially greater. Leisure time, much more leisure time, more diversions…

What professions should disappear because of nano-technology?

Ditch digger, tugboat captain – most professions where humans are now used as “smart brawn”, or as “the best available computer”, including jet fighter pilot, truck driver, surgeon, pyramid builder, steel worker, gold miner… not that there will not be people doing these jobs, just for fun. Charming libation vendors have a good future, until the A.I. people make some really scary breakthroughs (grin). I do expect “the best available computer” to be important for novel situations for quite a while… and we are just on the verge of finding out how frequent and varied novel situations can be.

I have a friend who has reading and math impairments and is thus “poorly” educated, yet he is a brilliant self-taught mechanic. Molecular machines are just small machines. With the right visualization tools (VR with tactile feedback), I think my friend could become a competent molecular machine designer and troubleshooter. We all have our talents to contribute. Perhaps this may be the greatest opportunity in history to express talents.


Tiny Silicon Devices Rapidly Measure, Count and Sort Biological Molecules

Nanobiotechnology Sorts DNA
by David Brand and Bill Steele

Source: Cornell University

February 15, 2001

Cornell researchers replace test tube with tiny silicon devices to rapidly measure, count and sort biological molecules

SAN FRANCISCO — Up to now, most biologists have studied the molecules of life in test tubes, watching how large numbers of them behave.

But now researchers at Cornell University in Ithaca, N.Y., are using nanotechnology to build microscopic silicon devices with features comparable in size to DNA, proteins and other biological molecules — to count molecules, analyze them, separate them, perhaps even work with them one at a time. The work is called “nanofluidics.”

“This will expand the methods for analyzing very small amounts of biochemicals, and create new abilities unanticipated by the test-tube methods,” says Harold Craighead, Cornell professor of applied and engineering physics and director of the Cornell Nanobiotechnology Center (NBTC).

Craighead will describe some of his laboratory’s work in a talk, “Separation and Analysis of DNA in Nanofluidic Systems,” at the annual meeting of the American Association for the Advancement of Science (AAAS) at the Hilton San Francisco today (Feb. 15, 2 p.m.). The talk is part of a two-day seminar on nanotechnology.

Craighead’s work began with a quest for an “artificial gel” that would replace the organic gels used to separate fragments of DNA for analysis. Traditionally this has been done by a process called gel electrophoresis. Enzymes are used to chop DNA strands into many short pieces of varying length. The sample is placed at one end of a column of organic gel and an electric field is applied to force the DNA to move through the gel. As they slowly snake their way through the tiny pores of the material, DNA fragments of different lengths move at different speeds and eventually collect in a series of bands as a ladder-like structure that can be photographed using fluorescent or radioactive tags. The resulting image, Craighead explains, is just a list of the lengths of the fragments, from which scientists can read out genetic information.

So he looked for other ways to sort DNA fragments by length that would allow scientists to work rapidly with small amounts of material. Craighead and his colleagues manufactured a variety of silicon-based nanostructures with pores comparable to the size of a large DNA molecule.
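The separation logic of electrophoresis can be sketched in a few lines: if migration speed falls off with fragment length, a mixed sample spreads into ordered bands. The 1/length mobility law and constants below are toy assumptions for illustration, not parameters from the Cornell devices:

```python
# Toy model of gel electrophoresis: shorter DNA fragments migrate
# faster, so a mixture separates into bands ordered by length.
# The mobility law (speed ~ 1/length) and the constant k are
# illustrative assumptions.

def band_position(length_bp, time_s, k=1.0e4):
    """Distance migrated (arbitrary units): speed falls off as 1/length."""
    return k * time_s / length_bp

fragments = [100, 250, 500, 1000, 2000]  # lengths in base pairs
positions = {bp: band_position(bp, time_s=60.0) for bp in fragments}

# The printed "ladder" mirrors the banding pattern read off a real gel:
# the shortest fragment travels farthest.
for bp, pos in sorted(positions.items()):
    print(f"{bp:5d} bp -> {pos:8.1f}")
```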

They have explored three approaches to DNA separation:

* Devices with openings that are less than the “radius of gyration” of the DNA fragments — When suspended in water, a DNA chain quickly coils into a roughly spherical blob. When pressed against a barrier with openings smaller than the radius of this form it must uncoil to pass through. Paradoxically, larger molecules do this more quickly, because they press a broader area against the obstacle, offering more places where a bit of the chain can be drawn in to start the uncoiling. When an electric field is used to drive a mix of DNA fragments along a channel with several such barriers, fragments of different lengths will move at different speeds, arriving at the far end in a series of bands not unlike those seen in gel electrophoresis.

* Sorting by physical length — A DNA sample is placed just outside a “forest” of tiny pillars arranged in a square grid, and an electric field applied to force the molecules to move into the grid. (Imagine pulling a coiled watch spring into a long, straight alley.) If the electric field is turned on just briefly and turned off before the molecule gets all the way in, the uncoiled portion will snap back out, just as the watch spring will pull back into its coil. But once the entire molecule is inside the grid, there is nothing to pull it back out. By varying the length of the electric field pulse, the researchers can control the length of the DNA strands that are collected in the grid. In addition to providing a way to measure strand length, Craighead says, this tool could be used to separate DNA for other work. If two molecules of different length are present at the start, the shorter molecules could be moved into the grid, leaving a pure sample of the longer strands outside.

* Lateral diffusion by length — When moving through a grid of tiny pillars, DNA chains are constantly buffeted by moving water molecules that can knock them off-track, a process called “Brownian motion.” If the pillars are flat vanes, all angled in the same direction, movement of all the chains will be skewed to one particular side. Shorter, lighter molecules will be pushed farther, so molecules can be sorted or measured based on the distance they have moved across the track when they emerge from the grid. Craighead calls these devices “Brownian ratchets.”

These techniques all work with molecules en masse, but Craighead’s group is also studying ways to work with single molecules, or at least to work with molecules one at a time. They have built microscopic tunnels just large enough for DNA molecules to run through in single file. Nanofabricated light pipes are placed on either side of the tunnel. Although very large, DNA molecules are still too small to be seen directly by visible light, but they can be tagged with other molecules that fluoresce when exposed to an ultraviolet laser, and the fluorescence can be detected, with larger molecules giving off longer pulses of light. In addition to counting the number of molecules of a given size in a sample, these devices could incorporate switches that could shunt molecules of different sizes into different channels, Craighead says.
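The Brownian ratchet idea lends itself to a toy Monte Carlo: each row of angled vanes gives a drifting molecule a chance to hop one lane sideways, and because shorter molecules diffuse farther in the time it takes to pass a row, they hop more often and emerge farther across the track. All parameters below are illustrative assumptions, not values from the Craighead group’s devices:

```python
import random

# Toy Monte Carlo of a "Brownian ratchet" sorter.  A molecule drifting
# down the channel passes rows of angled vanes; at each row, diffusion
# gives it some chance of hopping one lane sideways.  Shorter molecules
# diffuse farther per row, hop more often, and so drift farther across
# the track.  All parameters are illustrative assumptions.

def lateral_shift(length_bp, n_rows=200, trials=2000, seed=1):
    """Mean sideways displacement (in lanes) after crossing n_rows."""
    rng = random.Random(seed)
    # Assumed scaling: hop probability falls off as 1/sqrt(length).
    p_hop = min(1.0, 5.0 / length_bp ** 0.5)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n_rows) if rng.random() < p_hop)
    return total / trials

short = lateral_shift(100)    # 100 bp fragment
long_ = lateral_shift(2000)   # 2000 bp fragment
print(f"mean shift: {short:.1f} lanes (100 bp) vs {long_:.1f} lanes (2000 bp)")
```

The vanes’ asymmetry is what rectifies random thermal kicks into net sideways motion; here that rectification is abstracted into a one-way hop probability per row.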

While most of the work up to now has been with DNA, Craighead says, these methods could also be applied to the study of other large organic molecules, including proteins, carbohydrates and lipids.