It represents the technology for a new age: nanotechnology leads us into a world that cannot be observed with optical instruments, because its structures are smaller than the wavelength of visible light. This invisible world lies right in front of us. Welcome to the nanocosmos.
If, as a child, you ever examined the membrane of an onion, parts of a flower, or a hair under a microscope, you surely remember the curious and fascinating images that appeared amid shimmering refraction effects in the eyepiece. Today the light microscope is found more often in schools than in research laboratories. Although it is still useful for some research, many scientists are eager to extend their field of vision into the region beyond the microscopic world.
There a strange new world exists that is fundamentally inaccessible to optical instruments, because its structures are smaller than the wavelength of visible light. Since we measure sizes and distances there in nanometres (a millionth of a millimetre), we call this realm of invisibility the nanocosmos. Roughly speaking, the nanocosmos contains objects that are larger than atoms and smaller than a cell. It is the world of molecules, from small ones such as glucose up to the comparatively gigantic chain molecules of a living cell, for example the proteins and nucleic acids.
We should learn to think in the nanocosmos, for there are at least three good reasons to investigate the nanoworld: small is efficient (miniaturisation of circuits), small is specific (molecules are individuals), and small is intelligent. Single molecules, for example a DNA strand, can carry information. A billion identical DNA strands hold no more information than a single molecule, yet so far we need billions of molecules to read that information out. Once we can decode single molecules, we could process billions of different DNA sequences in a tiny drop and thereby potentially operate a high-performance DNA computer.
Such considerations have inspired scientists to develop new technologies at the smallest scale, technologies that can reliably store and handle information and carry out mechanical, chemical and electrical functions. This smallest practical scale, which is optimal for many applications and especially for data processing, is the molecular scale: the nanocosmos.
That is also the scale at which the cell handles all its mechanical, chemical and information-processing functions. The fundamental invisibility of the nanocosmos is one reason why biochemistry needed the whole 20th century just to find out how the living cell works. The molecular architecture of the cell, which structural scientists have made visible for us over the past decades through the painstaking application of indirect methods, is a fascinating and varied collection of nanometre-sized marvels, which often work completely differently from the machines constructed by human engineers in the visible world.
That is why scientists who try to master the nanocosmos often borrow suggestions from the inventory of the cell. Although evolution is occasionally lavish and inefficient and repurposes existing structures to fulfil new functions, there are nevertheless lessons we can learn from nature's nanotechnology. These principles may appear simple, but they have been so successful over the past 3.5 billion years that we know of no life form able to exist without them.
The discrete charm of the miniature world
When the word nanotechnology began its triumphal procession in the eighties, the ideas involved were primarily shaped by the influential theoretical books of Eric Drexler, which placed the manipulation of single atoms at the centre of the new technology. Nature itself, however, shuns single atoms. Simple chemical reactions that in principle join only two carbon atoms (in order ultimately to create sugars or other carbohydrates) are embedded in the cell in complicated networks of coupled reactions, in which every participant has at least three carbon atoms.
Such molecules are easier to handle than single atoms. Their chemical reactivity can be subtly modified, and nature's catalysts, the enzymes, can recognise them and distinguish them from other, similar molecules. Today most scientists agree that the coming epoch of nanotechnology will be a molecular technology rather than an atomic one.
Construction with molecules
The first two steps on nature's path to complex molecular structures involve traditional chemistry: covalent bonds between carbon atoms enable the creation of amino acids and of the polypeptide chain. But then, in the folding and assembly of these chains, nature resorts to methods that found their way into chemistry only a few decades ago. Numerous weak interactions, rather than a few covalent bonds, maintain the three-dimensional structure of the protein. And instead of creating these bonds by external intervention, nature designs the biopolymers so that they carry their purpose within themselves.
Proteins fold spontaneously, and even the most complicated molecular machines of the cell, such as the ribosome with its more than 50 macromolecular components, assemble themselves spontaneously.
Paths to the nanoworld
Now that we know how living cells perform their operations on the nanometre scale, we should be able to accomplish something similar, at least at a simple level. But how best to begin?
Two fundamentally different approaches can be distinguished: the "top-down" method, in which, starting from the macro world, the progressive miniaturisation of existing technologies is used to push forward into the nanocosmos; and, conversely, the "bottom-up" method, which constructs complex molecular structures from small molecules with the help of chemical and biochemical methods.
The top-down method is conceptually simple, but in practice it becomes more and more difficult the further you push into the nanocosmos. We owe decades of ongoing computer revolution to this method. As the functional elements in integrated circuits became ever smaller, the performance of a standard chip could be doubled roughly every 18 months over the past two decades. This phenomenon was first observed by Gordon Moore, one of the co-founders of the chip manufacturer Intel, and is therefore called Moore's law.
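The exponential growth implied by an 18-month doubling period is easy to underestimate; a short calculation makes it concrete (a sketch for illustration only; the function name and the 18-month period as a default parameter are assumptions, not taken from the article):

```python
# Illustrative sketch of Moore's law as stated in the text:
# performance doubles roughly every 18 months.

def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Return the total growth factor after `years`, assuming one
    doubling every `doubling_months` months."""
    doublings = (years * 12.0) / doubling_months
    return 2.0 ** doublings

# Over the two decades mentioned in the text, that is
# 20 * 12 / 18 = 13.3 doublings, i.e. a factor of roughly 10,000.
factor = moores_law_factor(20)
```

The striking point is that a modest-sounding rule (doubling every year and a half) compounds into a four-orders-of-magnitude gain within a single career span.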
Since chip producers have already moved beyond the scale of visible light, they are forced to use ultraviolet radiation, and will one day have to switch to X-rays, which are much more awkward to handle, in order to create smaller structures. Consequently every further step of miniaturisation becomes a bigger challenge than the previous one. Nevertheless, Moore's law has held up to the present day and will probably remain valid for some years to come.
Although computer manufacturers are the biggest customers for products on the nanometre scale, and thus the most important driving force behind the unabated trend toward further miniaturisation, there are other areas where the same manufacturing methods could prove useful. Small movable mechanical elements produced by the methods of chip fabrication are called MEMS (micro-electromechanical systems), and as so-called NEMS they will soon push forward into the nanocosmos.
Success story of MEMS
The classic success story of the MEMS sector is that of the airbag sensor in cars. The MEMS-based accelerometer proved to be smaller, better and considerably cheaper than its conventional predecessor; as a result, it captured the world market within a few months. Among the MEMS products that may have a similarly revolutionary impact in the future is the medication chip, which can be implanted during an operation in order to supply the patient continuously with the pharmaceutical substances required afterwards.
MEMS could also bring about a profound change in sensor technology. Using micromanufacturing techniques, scientists have recently developed an "artificial nose" that can identify volatile substances in complex mixtures, such as the aroma compounds of wine, with hitherto unequalled sensitivity.
With regard to sensors, it may well be that the two methods (bottom-up and top-down) will ultimately compete with each other; after all, our organs of smell and taste contain similarly sensitive molecular sensors.
The bottom-up method, on the other hand, is increasingly guided by nature's nanotechnology: it starts from small molecules and uses the modular principle, weak interactions and self-assembly to build complex systems from them.
Within this method there are different strategies, which differ in one respect: how much, or how little, do you want to copy from nature?