CCD and CMOS image sensor technologies were invented in the late 1960s and early 1970s. At the time, CMOS performance was limited by available lithography technology, allowing CCDs to dominate for the next 25 years. The original argument a decade ago for the renewal of CMOS image sensors as a competitor to CCD technology was generally based on several ideas:
- Lithography and process control in CMOS fabrication had reached levels that soon would allow CMOS sensor image quality to rival that of CCDs.
- Integration of companion functions on the same die as the image sensor, creating camera-on-a-chip or system-on-a-chip capabilities.
- Lowered power consumption.
- Reduced imaging system size, as a result of integration and reduced power consumption.
- The ability to use the same CMOS production lines as mainstream logic and memory device fabrication, delivering economies of scale for CMOS imager manufacturing.
Other conventional arguments favouring CMOS included operation with a single power supply. A great deal has changed with CMOS and CCD technology. Some projections turned out to be true. Others have changed with an evolving technology landscape. Today there is a vibrant industry for both types of sensors. Structural changes in the technology and business environment mean that a new framework now exists for considering the relative strengths and opportunities of CMOS and CCD sensor technology.
CCDs move photogenerated charge from pixel to pixel and convert it to a voltage at an output node. CMOS imagers convert charge to voltage inside each pixel.
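The contrast between the two readout architectures can be sketched with a toy model. All numbers below are illustrative assumptions, not real device parameters: the point is only that CCD charge packets undergo one transfer per pixel position on the way to the single output node, whereas the CMOS model converts charge in place.

```python
# Toy readout model (illustrative assumptions only).
# CCD: charge packets are shifted pixel-to-pixel to one output node, so
# packets generated farther from the node undergo more transfers, each
# retaining a fraction `cte` (charge-transfer efficiency) of the charge.
# CMOS: charge-to-voltage conversion happens inside each pixel.

def ccd_readout(charges, cte=0.99999):
    """Shift each charge packet to a single output node."""
    out = []
    for i, q in enumerate(charges):
        transfers = i + 1                  # farther packets move more times
        out.append(q * cte ** transfers)
    return out

def cmos_readout(charges, gain=1.0):
    """In-pixel conversion: one step, no transfer loss in this model."""
    return [q * gain for q in charges]

row = [1000.0] * 4                         # uniform illumination, 4 pixels
print(ccd_readout(row))                    # slight, position-dependent loss
print(cmos_readout(row))                   # charge preserved in every pixel
```

In this simple model the CCD's loss is position-dependent, which is why charge-transfer efficiency had to become extremely high in real CCDs.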
Straight Path for CCDs
CCD technology has undergone incremental advances in device design, materials and fabrication technology. CCD sensors have steadily increased in quantum efficiency, decreased in dark current and pixel size, reduced operating voltages (power dissipation) and improved signal handling. And their companion circuits have become more integrated, making CCDs easier to use and allowing faster time to market. CCDs now yield better performance with less power.
The Winding Road for CMOS
Compared with CCDs, the recent progress of CMOS technology has been more rapid, yet more turbulent. Arguably, the journey toward better performance in CMOS sensors began with improving the fill factor: because CMOS sensors generally require a number of optically insensitive transistors in each pixel, the desire for performance and flexibility in pixel architecture competes with the amount of light-sensing space in each pixel. The pursuit of a greater fill factor and the related ability to produce smaller pixels has driven minimum feature sizes well below the 0.5 um of a decade ago. CMOS sensors have progressed from fabrication process technology of 0.35, 0.25 and 0.18 um to 90 nm in the most advanced devices and, in a growing number of cases, even smaller.
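The fill-factor arithmetic behind this pressure can be illustrated with a small sketch. The pixel pitch and transistor areas below are assumed values chosen only to show the trend, not figures for any actual sensor.

```python
# Illustrative fill-factor arithmetic (all dimensions are assumed values).
# Fill factor = photosensitive area / total pixel area; shrinking the
# in-pixel transistors frees area for the photodiode.

def fill_factor(pixel_pitch_um, transistor_area_um2):
    pixel_area = pixel_pitch_um ** 2
    return (pixel_area - transistor_area_um2) / pixel_area

# A 5.6 um pixel whose readout transistors occupy ~15 um^2 at an older
# process node, versus ~4 um^2 after a lithography shrink:
print(fill_factor(5.6, 15.0))   # roughly 0.52
print(fill_factor(5.6, 4.0))    # roughly 0.87
```

The same shrink can instead be spent on a smaller pixel at constant fill factor, which is the trade-off driving pixel-size reduction.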
Advancing lithography technology to improve fill factor and optical sensitivity increased the opportunity for digital integration on the chip because smaller transistors decrease both power dissipation and the die size that is needed for integrated circuit functions.
However, CMOS technology’s dependence on advancing lithography came at a price. Progressively denser lithography increased development costs. And, although smaller transistor sizes facilitate digital integration, integration often increases design complexity faster than design productivity. Substantial on-chip digital integration can bring with it noise coupling issues, with switching transients introducing noise into analogue signal pathways and even into some digital ones. This noise coupling can conflict with the pursuit of sensor image quality. Design complexity, design cycle duration and noise have often meant that digital integration generally has not been able to take full advantage of the lithographic trajectory of CMOS image sensors.
A more significant and unavoidable challenge of deep submicron sensor design in CMOS sensors is the analogue portion of the integrated circuit. As microelectronics fabrication technology becomes denser, analogue circuit performance typically suffers. For 0.25-um technology and smaller, supply voltages drop from 5-V levels, introducing constraints on the dynamic range at the signal levels relevant to most image sensors. Below 0.35 um, the linearity of transistor performance also tends to diminish.
Declining linearity and dynamic range combine to erode the accuracy of analogue circuitry. Other analogue performance complications, such as leakage current and complementary circuit matching issues, can arise with increasingly dense fabrication technologies. Fighting the decline of analogue performance in deep sub-micron CMOS required a significant sensor and circuit design shift. However, because there were few relevant precedents for such high-performance digitally assisted circuit design in other applications, it took a number of years to develop digitally assisted analogue architectures that fully balance the competing forces among design, electro-optical performance and fabrication of CMOS image sensors.
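The link between supply voltage and dynamic range can be shown with a back-of-envelope calculation. The signal swings and the noise floor below are assumed values for illustration only; real sensors differ widely.

```python
# Back-of-envelope dynamic range (assumed, illustrative numbers).
# For a fixed noise floor, a lower supply voltage compresses the usable
# analogue signal swing and therefore the dynamic range.
import math

def dynamic_range_db(signal_swing_v, noise_floor_v):
    return 20 * math.log10(signal_swing_v / noise_floor_v)

noise = 0.25e-3                          # assumed 0.25 mV noise floor
for swing in (3.3, 1.8, 1.2):            # swings at successively denser nodes
    print(f"{swing} V swing -> {dynamic_range_db(swing, noise):.1f} dB")
```

With these assumed numbers the drop from a 3.3 V to a 1.2 V swing costs nearly 9 dB, which is why deep-submicron analogue design had to compensate elsewhere.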
CMOS vs. CCD Fabrication Process
The fabrication process is a defining aspect of CMOS sensor performance and has evolved considerably. From an initial notion of reusing or lightly adapting standard logic or memory processes, there has been an iterative journey to optimized CMOS pixel sensor processes.
These process technologies have often become complex in terms of the number of mask layers and process steps to meet all competing requirements. The movement of CMOS image sensors away from standard memory or logic fabrication processes started with changes to silicides and dielectrics to improve optical compatibility. Further changes have been made to:
- Reduce the optical stack height and improve its structure, thus enhancing quantum efficiency, off-axis image quality and colour fidelity.
- Introduce pixel implants and deep depletion regions to control photodiode and Si-SiO2 interface performance, influencing leakage (dark) current and image lag.
- Simultaneously manage analogue and digital transistor properties as well as interconnects.
Process optimization at each lithography node typically requires experimentation and tweaking with real reticles and silicon, not just within a simulation environment. The appreciable cost of process optimization in CMOS image sensor fabrication has shifted the advantage to manufacturers with captive foundries. Some “fab-less” players have been successful, but far more success stories have been fab-based.
It has been easier for companies with fabs to customize the fabrication process because they have been able to maintain the attention of foundry process engineers. There will continue to be viable roles for both fab-based and fab-less business models in CMOS sensor development and production. However, the original notion of easy migration of production from one CMOS fab to another has given way to a far more cohesive and adapted relationship with a particular foundry, similar to that seen in the CCD industry.
CMOS vs. CCD Technology
To reach the levels of performance needed for a variety of high-volume applications, CMOS sensor pixel design and fabrication technology now more closely resembles that of CCDs than many people had predicted. Integration and power dissipation are decisive advantages of CMOS technology, whereas CCDs retain a greater ability for cost-effective adaptation and performance. Contrary to the initial outlook, processed wafer costs have turned out to be less of an automatic advantage for CMOS.
Wafer size, economies of scale and foundry-specific cost models, however, can be bigger factors favouring one technology over the other. Regardless of wafer size, the necessity of moving to deeper submicron technology for CMOS, for fill factor and other reasons, has delivered process control and cleanliness during fabrication (compared with less advanced fabrication processes) that can improve yield, particularly for large-die-area sensors. CCD technology is not as lithography-dependent for its performance as CMOS technology.
In general, achieving application-specific performance differentiation costs less with CCD technology than with CMOS, both in sensor design and the fabrication process. CMOS has made good on its promise of integration, low power dissipation and single-voltage-supply capabilities, and intensive iterative process engineering and device design have led to high image quality. The production cost per unit of processed silicon does not strongly favour one technology over the other (as originally thought).
The extensive process engineering and number of fabrication steps to bring CMOS image quality to levels comparable with CCDs required much more expensive wafer processing than was originally projected. Cost is often more strongly influenced by the business economics and competitive motivations of a particular foundry, rather than by the choice of technology itself.
There tend to be sharp differences in the wafer sizes used to manufacture CMOS and CCD image sensors, and the size depends on whether a manufacturer is fab-based or fab-less and whether it is adapting a depreciated logic or memory production facility. There are more often third-party foundries available for 200-mm wafer production of CMOS image sensors, whereas CCD foundry production is frequently on 150-mm wafer lines. Captive production of CMOS and CCD is done on 150-, 200- and 300-mm lines.
A larger wafer size reduces the labour cost per unit area of silicon processed. Thus, the availability of larger wafer sizes for CCD or CMOS can be a strong factor in the overall economics of production. The cost of manufacturing one or the other also depends on the type of wafer processing available and whether downstream sensor production volumes will carry the up-front development costs.
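The wafer-size economics can be sketched with the standard dies-per-wafer approximation. The die area and processed-wafer costs below are assumed, illustrative figures, not actual foundry pricing; they are chosen to reflect the text's point that processing cost grows more slowly than wafer area.

```python
# Rough dies-per-wafer estimate using a standard approximation
# (gross dies; edge losses via the pi*d/sqrt(2A) correction term).
# Die area and wafer costs are assumed values for illustration.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

die_area = 50.0                              # mm^2, assumed sensor die
wafers = ((150, 1000.0), (200, 1400.0), (300, 2400.0))  # assumed $/wafer
for diameter, wafer_cost in wafers:
    n = dies_per_wafer(diameter, die_area)
    # cost per die falls as wafer size grows (with these assumed costs)
    print(f"{diameter} mm: {n} dies, ${wafer_cost / n:.2f} per die")
```

Under these assumptions the 300-mm line roughly halves the per-die cost relative to 150 mm, which is why access to larger wafer sizes can dominate the production economics.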
CMOS imagers can be fabricated with more “camera” functionality on the chip, offering advantages in size and convenience.
| Initial Prediction for CMOS | Outcome CMOS vs. CCD |
| --- | --- |
| Image quality rivalling that of CCDs | Required much greater process adaptation and deeper submicron lithography than initially thought. High performance is available in both technologies today, but with higher development cost for most CMOS than CCD technologies |
| On-chip circuit integration | Greater integration in CMOS than CCD, but companion ICs are still often required with both. Integration brought longer development cycles, increased cost and trade-offs with noise and operational flexibility |
| Economies of scale from using mainstream logic and memory foundries | Extensive process development and optimization were required. Legacy logic and memory production lines are commonly used for CMOS imager production today, but with highly adapted processes akin to CCD fabrication |
| Reduced power consumption | CMOS is ahead of CCDs, although steady progress for CCDs diminished the margin of improvement |
| Reduced imaging subsystem size | Optics, companion chips and packaging are often the dominant factors in imaging subsystem size |