Smaller and better things to come
The American Physical Society’s annual meeting in December 1959 at the California Institute of Technology seemed much like any other. Few in the audience the night Richard Feynman gave the annual address could have appreciated its future significance. Some six years later Feynman would win the Nobel Prize for his pioneering work in quantum electrodynamics, but that evening he chose to speak about a new field in which “little has been done, but in which an enormous amount can be done in principle.”
Arguing that “the principles of physics do not speak against the possibility of maneuvering things atom by atom,” Feynman exhorted his audience to consider what could be achieved “if we could arrange atoms one by one the way we want them.” Instead of carving components out of bulk materials, engineers and scientists could use atoms, the basic building blocks of matter, to construct all manner of things: submicroscopic computer parts, tiny surgical devices programmed to identify and repair damaged heart valves, a 24-volume Encyclopaedia Britannica that could be printed on the head of a pin….
Such applications may have sounded like the stuff of science fiction to many in his audience, but Feynman believed their development lay in the relatively near future. Advances in miniaturization, he pointed out, had already produced electric motors the size of a fingernail. Learning how to build with atoms, despite all the technical and conceptual challenges, seemed to be the logical end point. Even in the interim, Feynman reckoned, smaller and better things lay ahead. He was right.
By 1974, the microtechnology revolution, perhaps best represented by the ever-shrinking silicon chip, was in full swing. That year Norio Taniguchi, a researcher at the Tokyo Science University, coined the term nanotechnology to describe the machining of matter on extremely small scales, including the nanometer scale. (A nanometer is one billionth of a meter, or about 10 atomic diameters.)
In short order, researchers began to use this term to describe a wide range of small-scale engineering. Today the latest advances in small-scale engineering, millimeter-sized microelectromechanical systems (MEMS), pervade the world. MEMS integrate electromechanical microsensors and microactuators with advanced microelectronics. Basically, they are tiny three-dimensional machines with moving parts. Blood pressure kits, carbon-monoxide detectors, and airbags all contain MEMS sensors. And thanks to sophisticated ultraviolet lithography techniques, engineers can now pack more than four million transistors onto a single dynamic random access memory chip.
These advances represent the fruits of the traditional, top-down approach to miniaturization. First you master the art of building small things. Then you learn to construct smaller and smaller things, gradually reducing their size. Despite the advances made during the past 25 years using this approach, we still have a long way to go to reach the nano level.
Some researchers, however, believe there’s another way to get there.
The Bottom-Up Approach
In 1981 K. Eric Drexler, a 26-year-old Ph.D. student at the Massachusetts Institute of Technology, published a paper in the Proceedings of the National Academy of Sciences USA entitled “Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation.” This paper hypothesized that if we learned how to design protein molecules, we could use them to construct molecular machines. Generations of these machines could then synthesize three-dimensional structures to atomic specifications. Feynman’s vision of arranging “atom by atom,” working from the bottom-up rather than the top-down, would be within reach.
That same year Gerd Binnig and Heinrich Rohrer at IBM’s Zurich Research Laboratory invented the tool that could help make Drexler’s vision a reality: the scanning tunneling microscope (STM).
An STM has a sharp, servo-controlled metal tip, usually made of tungsten, nickel, or gold. The tip carries electrical current. Applying a small voltage to the tip when it is near an object’s surface creates an effect physicists call tunneling: electrons tunnel, or jump, across the gap between the tip and the object. By measuring this tunneling current, researchers can generate an image of the object’s atomic surface. By increasing the voltage running through the STM’s tip, they can also move atoms, calling on the forces of electrical repulsion or attraction.
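What makes the tunneling current so useful for imaging is its extreme sensitivity to the tip-sample gap. A minimal sketch of the simple one-dimensional barrier model illustrates this; the 4 eV work function is a typical illustrative value, not a measurement from any particular STM:

```python
import math

# Physical constants (SI units)
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E  = 9.1093837e-31   # electron mass, kg
EV   = 1.6021766e-19   # one electron-volt, J

def tunneling_current(gap_m, work_function_ev=4.0):
    """Relative tunneling current across a gap of `gap_m` meters.

    Simple 1-D barrier model: I ~ exp(-2 * kappa * d), where
    kappa = sqrt(2 * m * phi) / hbar and phi is the barrier height
    (roughly the metal's work function).
    """
    kappa = math.sqrt(2 * M_E * work_function_ev * EV) / HBAR
    return math.exp(-2 * kappa * gap_m)

# Moving the tip just 1 angstrom (0.1 nm) closer boosts the current
# nearly an order of magnitude -- the basis of atomic resolution.
ratio = tunneling_current(3e-10) / tunneling_current(4e-10)
print(f"current ratio for a 1-angstrom gap change: {ratio:.1f}")
```

Because a single-ångström change in gap swings the current roughly eightfold, the feedback loop that holds the current constant traces the surface with sub-atomic precision.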
The STM’s invention fueled the beginnings of serious work on the bottom-up approach. In 1986 Drexler, who would later earn the first doctorate ever awarded in molecular nanotechnology from MIT, published Engines of Creation. The book contained his formal redefinition of nanotechnology as “the knowledge and means for designing, fabricating, and employing molecular scale devices by the manipulation and placement of individual atoms and molecules with precision on the atomic scale.” Engines also outlined the technological innovations possible with mature nanotechnology: super-strong structural materials, miniature programmable medical robots capable of patrolling the bloodstream, and vastly enhanced computing technology. Inspired by this vision, Ralph Merkle of the Xerox Palo Alto Research Center in California, a coinventor of public key cryptography (a system for encoding data into a form that can be read only by the intended receiver), and other like-minded research scientists began working in earnest on the theoretical and technical problems of how to build a molecular assembler.
Building a Molecular Assembler
STMs offered a means to tackle the first step in constructing a molecular assembler: moving the device’s atomic parts into place. Don Eigler and colleagues at IBM’s Almaden Laboratory demonstrated this capability in 1990, spelling out IBM’s logo with 35 xenon atoms on a nickel surface. To perform the feat, however, they had to operate in a total vacuum and cool the nickel to four degrees above absolute zero. Clearly, getting from this stage to a fully functioning assembler would hardly be a hop, skip, and a jump.
“Engineering a device [assembler] that grips molecules so that they can be rotated [into various positions] is far more difficult,” explains Robert Birge, director of Syracuse University’s W.M. Keck Center for Molecular Electronics. Thermal noise poses one problem. Atoms and molecules are in continuous motion; the higher the temperature, the more lively that motion. To maintain its position and structure amidst this noise, an assembler would have to be composed of a particularly stiff material, such as diamond. Chemists have long recognized the carbon-carbon bond to be one of nature’s strongest because each carbon atom can bond to four neighboring atoms. In diamond a network of these bonds creates the world’s stiffest material.
Another, perhaps more exotic, candidate for the job is the carbon nanotube. A carbon nanotube forms when a sheet of carbon atoms linked together in a honeycomb-like pattern rolls itself into a cylinder. If all the hexagons in the lattice align along the tube’s axis, it conducts electricity just like a metal. If, however, the tube rolls at a twisted angle, it acts like a semiconductor. These electrical properties have made carbon nanotubes one of the hottest potential alternatives to silicon chips. Also, the tubes’ extreme strength, coupled with the ability to string long chains of them together into highly durable nanoropes, makes them attractive building materials.
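The metal-versus-semiconductor distinction has a remarkably compact form. In the standard tight-binding description, a tube’s roll-up geometry is captured by two chiral indices (n, m), and the tube is metallic exactly when n - m is divisible by three. A minimal sketch of that classification rule:

```python
def nanotube_type(n, m):
    """Classify a carbon nanotube by its chiral indices (n, m).

    Standard tight-binding rule: the tube is metallic when (n - m)
    is divisible by 3, otherwise it is semiconducting.  Armchair
    tubes (n == m) are therefore always metallic.
    """
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

print(nanotube_type(5, 5))    # armchair: metallic
print(nanotube_type(10, 0))   # zigzag, (10-0) % 3 != 0: semiconducting
print(nanotube_type(9, 0))    # zigzag, (9-0) % 3 == 0: metallic
```

One consequence of this rule is that a batch of randomly rolled tubes yields roughly one metallic tube for every two semiconducting ones.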
Whatever carbon-based material it’s made of, an assembler must be capable of depositing atoms at precise locations. To do so, it must be able to control the chemical reactivity of the surface on which it is working. A raw diamond surface, for example, would be highly reactive because of all the carbon atoms’ unused bonds. Covering this surface with a layer of hydrogen atoms through a process called hydrogenation renders it inert. For the assembler to add a carbon atom to this surface, a single hydrogen atom must be selectively removed to create a reactive site with a dangling bond. Merkle believes this process of hydrogen abstraction can be achieved in several ways.
Acetylene radicals (two carbon atoms triple bonded together) offer one means. An acetylene radical has one dangling bond where hydrogen would normally be in ordinary acetylene. When moved across the diamond surface, the radical could conceivably pick up hydrogen atoms, leaving behind in its wake reactive spots where the assembler could deposit carbon atoms. To work, the whole procedure would need to occur in an inert environment, such as a vacuum.
It all sounds like a difficult proposition, but at least one engineer, Jim von Ehr, believes it can be done. In April 1997 von Ehr founded Zyvex Corporation in Richardson, Texas, a developmental engineering firm dedicated to building a molecular assembler, or what von Ehr describes as a “nanomanufacturing plant.”
The Zyvex assembler will not resemble the molecular scale devices Drexler originally envisioned, but it will achieve the same goal. The assembler will most likely perform tasks by manipulating chemical reactions, says von Ehr. Thanks to advances in scanning tunneling microscopy, the Zyvex team has two other tools at its disposal capable of imaging and pushing atoms around: atomic force and scanning probe microscopes.
An atomic force microscope (AFM), also invented by IBM scientists, has a tip equipped with a cantilever mechanism. This mechanism allows the tip to move up and down. By measuring the up-and-down motion of the AFM’s tip as it is dragged across an object, researchers can generate a map of the object’s molecular surface and move atoms through electrical repulsion. Scanning probe microscopes (SPM) operate in a tapping mode. Their tips can actually push atoms into place.
One of the major obstacles to using STMs, AFMs, and SPMs in constructing assembler-like devices has been the amount of time involved in the procedure. Because all three tools move in steps that are less than one atomic diameter, they are very slow. A recent breakthrough by a team of Cornell University doctoral students and researchers, however, could speed up the process considerably. The team, headed by electrical engineering professor Noel MacDonald, works out of Cornell’s Nanofabrication Facility, a member of the National Science Foundation’s National Nanofabrication Users Network. A 10-year partnership cofunded by NSF’s Biological Sciences, Engineering, and Mathematical and Physical Sciences Divisions, the network offers researchers around the country access to state-of-the-art ion beam lithography, prototyping, and scanning equipment. Other network members include Stanford University, Howard University, Pennsylvania State University, and the University of California at Santa Barbara.
After eight years of research, the Cornell team has produced a microelectrical mechanical scanning tunneling microscope (MEM STM), a miniature STM. A conventional STM uses piezoelectric (solid-state ceramic) motors to drive its tip; the Cornell MEM STM’s drive is roughly the diameter of a strand of human hair. The MEM STM also can operate up to 10,000 times faster than an STM, conceivably “scanning at the order of a thousand to a million cycles per second,” MacDonald says. If placed on a chip by the thousands in parallel fashion, a collection of MEM STMs could allow engineers and scientists “to move around in a microsecond things that previously took minutes.”
Breakthroughs like the Cornell MEM STM aside, the construction of a working molecular assembler is by no means certain. Most work in this area remains limited to computer simulation. Given the technical and theoretical challenges of creating such a structure and then programming it, some researchers remain justifiably skeptical that an assembler will ever be built. In a recent Newsweek article Richard Smalley, director of Rice University’s Center for Nanoscale Science and Technology, voiced his deep concern “that a universal assembler is flat-out impossible.”
Yet academe and industry appear to agree that nanotechnology will dominate the next century. Several major U.S. companies, including Xerox, Eli Lilly, Autodesk, Texas Instruments, Dow Chemical, DuPont, IBM, and AT&T, support nanotechnology research at universities and their own facilities. Eleven U.S. government agencies, including NASA, the Defense Advanced Research Projects Agency, and NSF, invest a combined total of roughly $115 million annually in related physics and chemistry research. Internationally, China, the European Union, Korea, Taiwan, and the United Kingdom have initiated major research and development programs. Japan alone has invested more than $220 million in two 10-year R&D programs.
The investments appear to be paying off. Nanotechnology’s near-term applications include enhanced semiconductors, sophisticated biosensors, and improved medical devices.
For the past 30 years, the computer industry’s ability to exponentially increase computers’ speed and memory has appeared limitless. In truth, these dramatic leaps forward can all be attributed to a single trend: the miniaturization of the computer’s basic component, the transistor.
Moore’s Law, postulated in 1965 by Intel cofounder Gordon Moore, predicts a doubling of the number of transistors per chip roughly every 18 months. Extrapolating from this law, transistors will reach the 50-nanometer level shortly after the turn of the century. Yet at this size, they will simply be too small to function. As Mark Law, professor of electrical and computer engineering and codirector of the Software and Analysis of Advanced Materials Processing (SWAMP) Center at the University of Florida, explains, “the smaller the device, the fewer atoms it contains.” At a certain point, Law says, transistors “eventually will contain too few atoms to work” because of a number of quantum mechanical effects. For example, when the barrier that prevents electrical current from flowing when a transistor is off shrinks below a certain size, electrons will be able to tunnel through that barrier, thereby disrupting the transistor’s operations.
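The extrapolation is straightforward arithmetic: if transistor density doubles every 18 months and linear feature size scales as one over the square root of density, the timetable falls out directly. The 250-nanometer starting point below is an assumed, roughly contemporary figure for illustration, not one given in the text:

```python
import math

def years_to_feature_size(start_nm, target_nm, doubling_months=18):
    """Years until linear feature size shrinks from start_nm to
    target_nm, assuming transistor density doubles every
    `doubling_months` (Moore's Law) and feature size scales as
    1 / sqrt(density).
    """
    density_ratio = (start_nm / target_nm) ** 2
    doublings = math.log2(density_ratio)
    return doublings * doubling_months / 12

# Assumed starting point: ~250 nm features in the late 1990s
print(f"~{years_to_feature_size(250, 50):.1f} years to reach 50 nm")
```

Under these assumptions the 50-nanometer mark arrives about seven years out, consistent with the article’s “shortly after the turn of the century.”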
Law and his colleague Kevin Jones, professor of materials science and engineering and codirector of the SWAMP center, project that the transistor layer at the heart of the Pentium processor will soon be only 50 atoms thick. Once it’s less than 10 atoms thick, “it will shrink itself out of function.” Law and Jones believe this could happen in a little more than a decade.
Nanoscale computer technologies offer the means of circumventing the material limitations of current transistor design. As discussed previously, researchers are actively exploring carbon nanotubes’ potential. Last fall physics professor Alex Zettl and colleagues at the University of California at Berkeley completed experiments that showed a single carbon nanotube could actually hold many miniature electrical components. A computer based on carbon nanotube components would not only be small, it would be fast and powerful.
Chemical computers offer yet another alternative to present silicon-wafer technology. These computers would process information by making and breaking chemical bonds. Information would be stored in the chemical structures resulting from those operations. In 1994 Leonard Adleman, a computer science professor at the University of Southern California (USC), unveiled a functional DNA computer capable of solving a “traveling salesman”-style problem. Adleman instructed the computer to find a route through seven cities that visited each city exactly once. DNA strands represented each city and all paths between them. The computer sorted through all these strands to determine the correct solution. Adleman’s invention not only operated 100 times faster than a fast supercomputer, it demonstrated greater energy and memory efficiency.
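The search Adleman performed chemically can be mimicked in ordinary code. A minimal brute-force sketch of the same kind of seven-city problem follows; the graph’s edges are made up for illustration and are not Adleman’s actual instance:

```python
from itertools import permutations

# A toy directed graph on seven "cities" (0..6).  These edges are
# invented for illustration, not taken from Adleman's experiment.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
         (3, 5), (4, 5), (5, 6), (2, 4), (4, 6)}

def hamiltonian_paths(n, edges, start=0, end=None):
    """Enumerate routes that visit every city exactly once.

    The DNA computer did this enumeration chemically: strands encoding
    every candidate path formed at once in the test tube, and wet-lab
    filtering steps kept only the valid ones.  Here we brute-force the
    same search in silicon.
    """
    end = n - 1 if end is None else end
    for middle in permutations(set(range(n)) - {start, end}):
        path = (start, *middle, end)
        if all(step in edges for step in zip(path, path[1:])):
            yield path

for path in hamiltonian_paths(7, edges):
    print(path)
```

The contrast is the point: the silicon version tries the 120 candidate orderings one at a time, while the DNA version effectively generated and tested all of them simultaneously, which is the parallelism the article goes on to describe.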
Currently, Adleman and USC molecular biologist Myron Goodman are working to apply DNA computing power to the National Security Agency’s Data Encryption Standard. Adleman and Goodman believe this problem, which lies beyond the scope of most modern supercomputers, ideally suits DNA computing because of its enormous parallel-computing capacity.
In 1995 Wired magazine asked five of nanotechnology’s top researchers to name the first products likely to emerge from their work. Biosensors ranked at the top of Richard Smalley’s list. These pinpoint-size devices could perform a number of sensing tasks, such as measuring blood-sugar levels or recognizing biological agents in other media.
Last June scientists at Australia’s Cooperative Research Centre for Molecular Engineering and Technology announced that they had produced the world’s first functioning nanoscale biosensor. The sensor’s primary component is an ion channel, 1.5 billionths of a meter in size. This channel functions like an electrical switch embedded in a two-layered lipid membrane. When the channel is open, ion particles can flow through both layers. Closing the channel cuts off the particle flow. Its creators claim that the biosensor could detect the increase in the sugar content of Sydney Harbor after a single sugar cube was tossed into the water. Hyperbole or not, tests have shown that the sensor works equally well in a variety of media, including water and human blood. The potential applications are enormous, spanning bioremediation to medicine.
The idea of a miniature surgical robot, programmed to repair damaged organs or locate and destroy diseased cells, has been one of the most enduring futurist visions associated with nanotechnology. While researchers are years away from building and programming such complex devices, biosensors and a variety of other nanoscale technologies show promise of improving patient care and treatment within the next few decades.
At the University of Washington (UW), Viola Vogel, associate professor of bioengineering and codirector of the university’s Center for Nanotechnology, and Jonathon Howard, professor of neurobiology, recently concluded a series of experiments in which they successfully controlled the path that a molecule follows along a nanoengineered synthetic surface. Vogel describes this work as the beginnings of engineering a “chemical shuttle transport system.” A working shuttle system could revolutionize drug delivery systems.
Medications released from pills today peak at a maximum concentration in the patient’s bloodstream within a certain time frame. The concentration then gradually declines until the patient ingests the next dose. Physicians have long known, however, that medications are most effective when a constant concentration is maintained, such as through an intravenous drip. A nanoscale chemical shuttle system could make this possible without the patient being hooked up to a machine.
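The peak-and-trough pattern the paragraph describes falls out of simple first-order elimination. A toy one-compartment sketch, with a made-up drug half-life and dose amounts chosen purely for illustration:

```python
import math

def concentration(t_hours, doses, half_life=6.0):
    """Blood concentration from repeated doses, assuming simple
    first-order (exponential) elimination -- a toy one-compartment
    model with invented numbers, not real pharmacokinetic data.

    `doses` is a list of (time_given_hours, amount) pairs.
    """
    k = math.log(2) / half_life  # elimination rate constant
    return sum(amount * math.exp(-k * (t_hours - t0))
               for t0, amount in doses if t0 <= t_hours)

pills = [(0, 100), (8, 100), (16, 100)]  # one pill every 8 hours

# Concentration swings: it decays between doses, then jumps when
# the next pill is absorbed.
print(concentration(0.0, pills))   # peak after the first dose
print(concentration(4.0, pills))   # mid-interval decline
print(concentration(8.0, pills))   # residue of dose 1 plus dose 2
```

An intravenous drip, or the envisioned nanoscale shuttle, would replace these swings with a steady level, which is exactly the clinical advantage the article cites.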
While Vogel and Howard push the limits of drug delivery systems, Buddy Ratner, professor of bioengineering and chemical engineering and director of UW’s Engineered Biomaterials Center, focuses on developing “a new generation of implanted medical devices made of biomaterials that are recognized by cells in the body.” This work is part of an 11-year project, funded by an initial five-year NSF grant worth $12.4 million. The first step, Ratner and his colleagues believe, lies in synthesizing nanoscale coatings that can be applied to implants to make them more biointeractive.
Each year the human body’s natural defenses render millions of medical implants useless. When injured, the body reacts with an inflammatory response, summoning extra white blood cells to the site of the wound to promote quick tissue regeneration. Medical implants trigger this same response. When the arriving white blood cells recognize the implant as foreign material, however, the immune system kicks into gear. The implant is quickly coated with a layer of proteins that identify it as foreign. Those proteins begin communicating with surrounding cells, triggering them to begin walling the implant off from the rest of the body with scar tissue. Hence, vascular grafts used to bypass clogged arteries almost always become clogged with scar tissue, and catheters (by far the most common medical implant) frequently trigger infection and must be removed.
Implants coated with thin films of biointeractive substances, however, could pass as organic, thereby tricking the body into beginning the normal healing process. Thomas Horbett, a UW bioengineering and chemical engineering professor, has been experimenting with using low-temperature glow plasma deposition technologies to form thin coatings of peptides around implants to prevent the scar tissue encapsulation response. Among other things, the coatings should “inhibit coagulation,” making the implant more compatible with blood.
Ralph Merkle describes nanotechnology as “the cornerstone of future technology,” going so far as to say that “if we make the right decisions, we could see real results in 20 years’ time.”
In light of our current capabilities, such pronouncements seem slightly optimistic. Although the development of basic enabling tools such as the STM, AFM, and SPM has yielded tremendous knowledge gains, engineers and scientists are just beginning to learn how to use these tools to achieve precise control of atoms. Similarly, while advanced computer simulations have contributed to great strides in computational understanding, researchers are only starting to tackle the challenges involved in translating those simulations into working realities. Ongoing fundamental research in a wide variety of areas, particularly surface chemistry and physics, is imperative to realizing the promises of nanotechnology.
Still, the possibility, however remote, of achieving Feynman’s goal of “maneuvering atom by atom” inspires the imagination. As biologist Thomas Donaldson once said, “The future isn’t a place that we travel to. The future is something that we have to build.”
Copyright American Society for Engineering Education Sep 1998