History
The development of biochips has a long history, starting with early work on the underlying sensor technology. One of the first portable, chemistry-based sensors was the glass pH electrode, invented in 1922 by Hughes (Hughes, 1922). Measurement of pH was accomplished by detecting the potential difference developed across a thin glass membrane selective to the permeation of hydrogen ions; this selectivity was achieved by exchanges between H+ and SiO sites in the glass. The basic concept of using exchange sites to create permselective membranes was used to develop other ion sensors in subsequent years. For example, a K+ sensor was produced by incorporating valinomycin into a thin membrane (Schultz, 1996). Over thirty years elapsed before the first true biosensor (i.e. a sensor utilizing biological molecules) emerged. In 1956, Leland Clark published a paper on an oxygen-sensing electrode (Clark, 1956). This device became the basis for a glucose sensor developed in 1962 by Clark and his colleague Lyons, which utilized glucose oxidase molecules embedded in a dialysis membrane (Clark, 1962). In the presence of glucose, the enzyme decreased the amount of oxygen available to the oxygen electrode, thereby relating oxygen levels to glucose concentration. This and similar biosensors became known as enzyme electrodes, and they are still in use today.
In 1953, Watson and Crick announced their discovery of the now familiar double-helix structure of DNA molecules and set the stage for genetics research that continues to the present day (Nelson, 2000). The development of sequencing techniques in 1977 by Gilbert (Maxam, 1977) and Sanger (Sanger, 1977), working independently, enabled researchers to directly read the genetic codes that provide instructions for protein synthesis. This research showed how hybridization of complementary single-stranded oligonucleotides could be used as a basis for DNA sensing. Two additional developments enabled the technology used in modern DNA-based biosensors. First, in 1983 Kary Mullis invented the polymerase chain reaction (PCR) technique (Nelson, 2000), a method for amplifying DNA concentrations. This discovery made possible the detection of extremely small quantities of DNA in samples. Second, in 1986 Hood and co-workers devised a method to label DNA molecules with fluorescent tags instead of radiolabels (Smith, 1986), thus enabling hybridization experiments to be observed optically.
The rapid technological advances of the biochemistry and semiconductor fields in the 1980s led to the large-scale development of biochips in the 1990s. At this time, it became clear that biochips were largely a “platform” technology consisting of several separate, yet integrated, components. Figure 1 shows the makeup of a typical biochip platform. The actual sensing component (or “chip”) is just one piece of a complete analysis system. Transduction must be done to translate the actual sensing event (DNA binding, oxidation/reduction, etc.) into a format understandable by a computer (voltage, light intensity, mass, etc.), which then enables additional analysis and processing to produce a final, human-readable output. The multiple technologies needed to make a successful biochip — from sensing chemistry, to microarraying, to signal processing — require a true multidisciplinary approach, making the barrier to entry steep. One of the first commercial biochips was introduced by Affymetrix. Its “GeneChip” products contain thousands of individual DNA sensors for use in sensing defects, or single-nucleotide polymorphisms (SNPs), in genes such as p53 (a tumor suppressor) and BRCA1 and BRCA2 (related to breast cancer) (Cheng, 2001). The chips are produced using microlithography techniques traditionally used to fabricate integrated circuits.
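To make the platform idea concrete, the minimal sketch below models the chain described above — sensing event, transduction, and signal processing — as three Python stages. The stage names, signal values, and detection threshold are illustrative assumptions, not any particular product's design.

```python
# A minimal sketch of the platform view in Figure 1: the sensing chip is one
# stage in a pipeline that turns a binding event into a machine-readable
# signal and then into a human-readable result. All values are assumptions.

def sense(sample, probe):
    """Sensing event: does the sample hybridize to the probe?"""
    return probe in sample                      # stand-in for DNA binding

def transduce(bound):
    """Transduction: turn the binding event into a measurable quantity."""
    return 850.0 if bound else 40.0             # e.g. fluorescence intensity (a.u.)

def process(intensity, threshold=200.0):
    """Signal processing: reduce the raw signal to a final, readable call."""
    return "target detected" if intensity > threshold else "not detected"

print(process(transduce(sense("TTACGGAT", "ACGG"))))   # -> "target detected"
```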
Today, a large variety of biochip technologies are either in development or being commercialized. Numerous advancements continue to be made in sensing research that enable new platforms to be developed for new applications. Cancer diagnosis through DNA typing is just one market opportunity. A variety of industries currently desire the ability to simultaneously screen for a wide range of chemical and biological agents, with purposes ranging from testing public water systems for disease agents to screening airline cargo for explosives. Pharmaceutical companies wish to combinatorially screen drug candidates against target enzymes. To achieve these ends, DNA, RNA, proteins, and even living cells are being employed as sensing mediators on biochips (Potera, 2008). Numerous transduction methods can be employed, including surface plasmon resonance, fluorescence, and chemiluminescence. The particular sensing and transduction techniques chosen depend on factors such as price, sensitivity, and reusability.
Microarray fabrication
The microarray — the dense, two-dimensional grid of biosensors — is the critical component of a biochip platform. Typically, the sensors are deposited on a flat substrate, which may either be passive (e.g. silicon or glass) or active, the latter consisting of integrated electronics or micromechanical devices that perform or assist signal transduction. Surface chemistry is used to covalently bind the sensor molecules to the substrate medium. The fabrication of microarrays is non-trivial and is a major economic and technological hurdle that may ultimately decide the success of future biochip platforms. The primary manufacturing challenge is the process of placing each sensor at a specific position (typically on a Cartesian grid) on the substrate. Various means exist to achieve this placement, but typically robotic micro-pipetting (Schena, 1995) or micro-printing (MacBeath, 1999) systems are used to place tiny spots of sensor material on the chip surface. Because each sensor is unique, only a few spots can be placed at a time. The low-throughput nature of this process results in high manufacturing costs.
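As an illustration of why this process is inherently serial, the sketch below builds a hypothetical spotting worklist for a robotic arrayer: every probe is assigned its own grid coordinate and its own deposition step. The spot pitch, row-major layout, and function names are assumptions for illustration, not any vendor's protocol.

```python
# A minimal sketch of serial spotting by a robotic micro-pipetting or
# micro-printing system: each probe gets a fixed Cartesian coordinate, and
# the coordinate itself later encodes the probe's identity. Pitch and layout
# here are illustrative assumptions.

SPOT_PITCH_UM = 200          # assumed center-to-center spot spacing

def build_spotting_worklist(probes, columns=12):
    """Lay probes out row-major on a grid; one deposition step per probe.

    Because every spot carries a different probe, the robot must visit each
    position in sequence -- the low-throughput step described above.
    """
    worklist = []
    for index, probe in enumerate(probes):
        row, col = divmod(index, columns)
        worklist.append({
            "probe": probe,
            "grid": (row, col),
            "xy_um": (col * SPOT_PITCH_UM, row * SPOT_PITCH_UM),
        })
    return worklist

# The same table later serves as the decode map: a fluorescent signal at
# (row, col) is attributed to the probe spotted there.
for step in build_spotting_worklist(["probe_A", "probe_B", "probe_C"], columns=2):
    print(step)
```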
Fodor and colleagues developed a unique fabrication process (later used by Affymetrix) in which a series of microlithography steps is used to combinatorially synthesize hundreds of thousands of unique, single-stranded DNA sensors on a substrate one nucleotide at a time (Fodor, 1991; Pease, 1994). One lithography step is needed per base type; thus, a total of four steps is required per nucleotide level. Although this technique is very powerful in that many sensors can be created simultaneously, it is currently only feasible for creating short DNA strands (15–25 nucleotides). Reliability and cost factors limit the number of photolithography steps that can be done. Furthermore, light-directed combinatorial synthesis techniques are not currently possible for proteins or other sensing molecules.
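The step count follows directly from the scheme: one mask exposure per base type at each nucleotide level, so an array of 25-mers requires up to 100 lithography steps regardless of how many sensors are made. The sketch below simulates this light-directed combinatorial synthesis under simplified assumptions (idealized masks, no failed couplings); the function names and mask-selection logic are illustrative, not the actual Affymetrix process.

```python
# A minimal sketch of light-directed combinatorial synthesis: at each
# nucleotide level, one photomask per base "exposes" (deprotects) exactly the
# features whose next required nucleotide is that base. Illustrative only.

BASES = "ACGT"

def synthesize(target_probes):
    """Grow every probe on the array one nucleotide level at a time.

    target_probes: dict mapping feature position (row, col) -> desired sequence.
    Returns the number of photolithography (mask) steps used.
    """
    length = max(len(seq) for seq in target_probes.values())
    grown = {pos: "" for pos in target_probes}   # sequence built so far per feature
    steps = 0

    for level in range(length):                  # one nucleotide level per pass
        for base in BASES:                       # one mask exposure per base type
            steps += 1
            # Coupling adds this base only at the mask-exposed features.
            for pos, seq in target_probes.items():
                if level < len(seq) and seq[level] == base:
                    grown[pos] += base

    assert grown == target_probes                # every feature got its sequence
    return steps

# Example: three 4-mers built in parallel -> 4 bases x 4 levels = 16 steps,
# however many features the array holds.
probes = {(0, 0): "ACGT", (0, 1): "AAGT", (1, 0): "TGCA"}
print(synthesize(probes))                        # -> 16
```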
As noted above, most microarrays consist of a Cartesian grid of sensors. This approach is used chiefly to map or “encode” the coordinate of each sensor to its function. Sensors in these arrays typically use a universal signalling technique (e.g. fluorescence), thus making coordinates their only identifying feature. These arrays must be made using a serial process (i.e. requiring multiple, sequential steps) to ensure that each sensor is placed at the correct position. “Random” fabrication, in which the sensors are placed at arbitrary positions on the chip, is an alternative to the serial method. The tedious and expensive positioning process is not required, enabling the use of parallelized self-assembly techniques. In this approach, large batches of identical sensors can be produced; sensors from each batch are then combined and assembled into an array. A non-coordinate-based encoding scheme must be used to identify each sensor. Such a design was first demonstrated (and later commercialized by Illumina) using functionalized beads placed randomly in the wells of an etched fiber optic cable (Steemers, 2000; Michael, 1998). Each bead was uniquely encoded with a fluorescent signature. However, this encoding scheme is limited in the number of unique dye combinations that can be used and successfully differentiated.
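To illustrate how a non-coordinate encoding scheme works, and why the number of usable codes is limited, the sketch below assigns each sensor batch a dye-intensity signature and identifies a well by signature lookup rather than by position. The dye names, level counts, and codebook construction are assumptions for illustration, not Illumina's actual scheme.

```python
# A minimal sketch of signature-based encoding for a randomly assembled bead
# array. Dyes, intensity levels, and the decode procedure are illustrative
# assumptions.

from itertools import product

DYES = ("red", "green", "blue")     # hypothetical encoding dyes
LEVELS = 4                          # distinguishable intensity levels per dye

def code_capacity(n_dyes=len(DYES), n_levels=LEVELS):
    """Code-space size: each bead type needs a unique signature."""
    return n_levels ** n_dyes       # 4**3 = 64 distinguishable bead types

def build_codebook(sensor_names):
    """Assign one signature (a tuple of per-dye levels) to each sensor batch."""
    signatures = product(range(LEVELS), repeat=len(DYES))
    return {sig: name for sig, name in zip(signatures, sensor_names)}

# Decoding: after random assembly, a well is identified by reading its bead's
# dye signature and looking it up -- position carries no information.
codebook = build_codebook(["p53_probe", "BRCA1_probe", "BRCA2_probe"])
well_signature = (0, 0, 1)          # measured intensity levels for one well
print(codebook.get(well_signature, "unknown bead"))   # -> "BRCA1_probe"
print(code_capacity())                                # -> 64
```

The `code_capacity` arithmetic also shows the limitation noted above: with a handful of dyes and a handful of reliably separable intensity levels, the number of unique signatures grows far more slowly than the number of sensor types one might want on a chip.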