CMOS digital pixel sensors
SPIEDigitalLibrary.org/conference-proceedings-of-spie
Orit Skorka, Dileepan Joseph, "CMOS digital pixel sensors: technology and
applications," Proc. SPIE 9060, Nanosensors, Biosensors, and Info-Tech
Sensors and Systems 2014, 90600G (16 April 2014); doi:
10.1117/12.2044808
ABSTRACT
CMOS active pixel sensor technology, which is widely used these days for digital imaging, is based on analog
pixels. Transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but can
increase pixel area to dimensions that are impractical for the high-volume market of consumer electronic devices.
There are two main approaches to digital pixel design. The first uses digitization methods that largely rely
on photodetector properties and so are unique to imaging. The second is based on adaptation of a classical
analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and
security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. With these
applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.
Keywords: CMOS image sensors, market trends, imaging applications, electromagnetic spectrum, pixel pitch,
digital pixel sensors, analog-to-digital converters, photodetectors.
1. INTRODUCTION
The image sensor market was traditionally dominated by charge-coupled device (CCD) technology. Ease of
on-chip integration, higher frame rates, lower power consumption, and lower manufacturing costs pushed
complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology to catch up with CCDs.
This trend is especially prominent in the high-volume consumer electronics market. Furthermore, the difference
in image quality, which gave CCDs an advantage in the early days, has narrowed substantially over the years.
When using either CCD or CMOS APS technology, electronic image sensors are based on analog pixels.
With CCD sensors, data conversion is done at board level, and with CMOS APS ones, data conversion is done at
either chip or column level. Because digital data is more immune to noise, transition to digital pixels can enhance
performance on signal and noise figures of merit. In particular, digital pixels enable higher signal-to-noise-and-
distortion ratios (SNDRs), lower dark limits (DLs), and wider dynamic ranges (DRs). SNDR is directly related
to image quality, DL manifests in performance under dim lighting, and DR indicates maximal range of brightness
that can be properly captured in a single frame.
With digital pixel sensor (DPS) technology, data conversion is done at pixel level, where each pixel outputs
a digital signal. Digital pixels are larger than analog ones because they contain more circuit blocks and more
transistors per pixel. These days, the highest-volume segment of the image sensor market comprises consumer
electronics applications that favor small pixels and high-resolution arrays. Many DPS designs are currently
unsuitable for this market segment. However, there are medical, surveillance, industrial, and automotive imaging
applications that can accept large pixels and benefit from digital pixels. These are low-volume but growing
markets, where imaging is sometimes done in invisible bands of the spectrum. There are many approaches to DPS
design, and specific application requirements make some preferred over others.
In this review paper, Section 2 analyzes the market of CMOS image sensors, focusing on diversification into
invisible spectral bands. Section 3 compares and contrasts various digital pixel architectures in the literature.
Main points are summarized in the conclusion section.
Please address correspondence to dil.joseph@ualberta.ca.
Figure 1. Low to high volume CMOS image sensor applications, according to a report prepared by Yole Développement:
medical systems, automotive, and transport (< 10 M units); video camcorders, security, and surveillance (> 10 M
units); mobile audio, TV, and gaming devices (> 100 M units); mobile phones, notebooks, and tablets (~ 1 B units).
Figure 2. Variation of typical pixel pitch with imaging band, across wavelengths from 10^-12 to 10^-3 m: γ-rays,
hard and soft X-rays, EUV, UVC, UVB, UVA, visible, near IR, S/M/L IR, far IR, and THz. (All artwork is original.)
Table 1. Typical properties of image sensors in spectral bands used for imaging.
Figure 3. (a) Die tiling is used in X-ray image sensors to fulfill the requirement for large-area arrays because X-ray imaging
is done without image demagnification. (b) Pixel in an uncooled IR image sensor with a microbolometer device.
2.2.3 UV imaging
Applications for ultraviolet (UV) imaging include space research, daytime corona detection,13 missile detection,
and UV microscopy. UV radiation from the sun in the range of 240 to 280 nm is completely blocked from reaching
the Earth by the ozone layer in the stratosphere. A camera that is sensitive only to this region will not see any
photons from the sun.
UV cameras based on monolithic crystalline silicon (c-Si) image sensors are available commercially. Exam-
ples include the Hamamatsu ORCA II BT 512, which uses a back-illuminated CCD sensor,14 and the Intevac
MicroVista camera, which uses a back-illuminated CMOS sensor.15
2.2.5 IR imaging
The infrared (IR) band is divided here into two regions. Near IR lies between 0.7 and 1.0 µm. With a bandgap
of 1.12 eV, c-Si is sensitive to radiation in this band. IR here refers to longer wavelengths, where other types of
photodetectors must be used. IR photodetectors may be categorized as either semiconductor or micro-electro-
mechanical system (MEMS) devices.
Operating principles of semiconductor photodetectors are based on solid-state physics, where free charge
carriers are generated by absorption of photons. Alloys of mercury cadmium telluride (MCT) are commonly
used for detection of IR radiation. Because photon energy in this band is on the order of thermal energy at room
temperature, semiconductor photodetectors must be cooled.
Operating principles of MEMS IR detectors, called microbolometers, are based on change in electrical prop-
erties of conductive films as a result of temperature increase with exposure to IR radiation. Microbolometers do
not require cooling, and can be directly deposited on a CMOS readout circuit array,17 as illustrated in Fig. 3(b).
IR imaging applications include medical imaging (e.g., breast thermography), night vision cameras, and
building inspection (e.g., detection of hot spots and water). With modern IR cameras, image sensors with pixel
pitch of 17 µm or higher are readily available.18
Figure 4. Reverse bias operation of photodiodes may be divided into three regions: PD, APD, and SPAD. The gain is 0
in the PD region, linearly proportional to V in the APD region, and “infinite” in the SPAD region.
Figure 5. (a) In time-to-first-spike pixels, a control circuit, triggered by a comparator, stops integration and stores the
time required for VPD to reach Vref . (b) Under brighter light, less time is required and a lower value is latched in the
memory. (c) In light-to-frequency conversion pixels, when VPD reaches Vref , a comparator increments a counter and resets
the photodiode. (d) Brighter lights lead to higher frequencies on Vcomp and higher values in the counter.
DPS arrays based on non-classical ADCs have been demonstrated with p-n junctions mainly in two of these
operating regions: PD, which requires reverse-bias voltages that are readily available from the CMOS supply
line; and SPAD, which operates in Geiger mode and requires extreme reverse-bias voltages, i.e., more negative
than the breakdown voltage, VBD.
Figure 6. (a) A SPAD-based pixel with PQC has a serially-connected ballast resistor, a circuit that performs edge detection,
and a counter. (b) Waveforms of the SPAD current and voltage, indicating Geiger operation, quenching, and reset.
In the light-to-frequency conversion approach, also called intensity-to-frequency conversion, the brightness
level is converted into frequency26 by repeatedly resetting the PD capacitance over the frame period. Fig. 5(c)
shows the schematic of a light-to-frequency conversion pixel.27 Waveforms of the pixel under bright and dim
light are shown in Fig. 5(d). Note the similarities between this pixel and the previous one.
At the beginning of a frame, the reset line is activated to charge the PD capacitance. During exposure, VPD
drops as the photocurrent progressively discharges the capacitance. When VPD drops below Vref , the comparator
generates a pulse that increments the counter and triggers a feedback circuit to recharge the capacitance. A new
integration cycle is then initiated, and the process is repeated until a fixed period elapses. At the end of the
frame period, the value that is stored in the counter is read, and the counter is reset to zero.
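As a rough behavioral model of this scheme, the counter value at the end of a frame can be estimated from the discharge time of the photodiode capacitance. The sketch below is illustrative only; the component values and names (c_pd, v_reset, v_ref, t_frame) are assumptions, not figures from this paper.

```python
def light_to_frequency_count(i_photo, i_dark, c_pd=10e-15,
                             v_reset=2.0, v_ref=1.0, t_frame=0.01):
    """Estimate the counter value of a light-to-frequency pixel.

    Each cycle, the total current (photocurrent plus dark current)
    discharges the PD capacitance c_pd from v_reset down to v_ref;
    the comparator then increments the counter and the feedback
    circuit recharges the capacitance.
    """
    dv = v_reset - v_ref                       # voltage swing per cycle
    t_cycle = c_pd * dv / (i_photo + i_dark)   # time for one integration cycle
    return int(t_frame / t_cycle)              # cycles completed in one frame
```

Doubling the photocurrent roughly doubles the count, so brighter pixels end the frame with higher counter values, consistent with Fig. 5(d).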
3.1.2 SPAD-based ADCs
With PD-based digital pixels, the detector output is an analog signal. It is converted to a digital signal via a
circuit that utilizes the PD, e.g., its reverse-biased capacitance. However, with SPAD-based digital pixels, the
detector output is a pulsed signal, where each pulse represents a detected photon, and the subsequent circuit
blocks detect each pulse and use it to increment a counter.
Because SPAD operation requires high voltages to accelerate electrons in high electric fields, the SPAD
structure must allow enough distance for charge acceleration and include guard rings for voltage isolation. This
results in a layout area that is substantially larger than that of a standard PD. Therefore, PDs are a better
choice for applications where small pixels and system compactness are desirable, whereas SPADs are preferred
for low-light and time-of-flight imaging applications.
When an electron-hole pair is generated in a SPAD, either by a photon absorption or by a thermal reaction,
the free charge carriers are accelerated by the high electric field across the junction, generating additional carriers
by impact ionization.28 To allow detection of subsequent photons, the avalanche process must be quenched, which
can be achieved by lowering the SPAD voltage to a level below VBD . This can be easily done by connecting a
high impedance ballast resistor, RB , in series with the SPAD.
When the circuit is inactive, the SPAD is biased to Vbias > VBD through RB . When a photon is absorbed and
successfully triggers an avalanche, the current rises abruptly. This results in development of a high voltage drop
over RB that acts as a negative feedback to lower the voltage drop over the SPAD. In this manner, the avalanche
current quenches itself, and the edge of an avalanche pulse marks the arrival time of a detected photon.
Fig. 6(a) shows this passive quenching circuit (PQC), as it is called. A comparator to perform edge detection
and a counter are also used. Waveforms of the SPAD current and voltage are shown in Fig. 6(b). PQCs are
suitable for SPAD arrays thanks to their simplicity and small area. However, they suffer from afterpulsing and
a long reset time,29 which may be overcome by additional circuitry. Mixed passive-active quenching circuits are
commonly used in SPAD arrays because they offer better performance. They include a feedback circuit that
starts quenching the SPAD as soon as an avalanche is sensed.
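The quench-and-reset cycle imposes a dead time during which arriving photons are missed. A simple non-paralyzable dead-time model, a common textbook approximation rather than anything taken from this paper, relates the observed count rate to the true photon rate:

```python
def measured_count_rate(photon_rate, dead_time):
    """Non-paralyzable dead-time model for a SPAD pixel.

    photon_rate: true rate of detectable photons (counts/s)
    dead_time:   quench plus reset time per avalanche pulse (s)
    Photons arriving while the SPAD recovers are lost.
    """
    return photon_rate / (1.0 + photon_rate * dead_time)
```

At a true rate of 10^6 counts/s and a 1 µs dead time, half the photons are missed; shortening the recovery time is one motivation for the mixed passive-active quenching circuits mentioned above.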
Figure 7. (a) In ramp-compare ADC pixels, erroneously called single-slope integrating ADC pixels, the detector output
Vsense is compared to an external ramp Vramp . (b) When Vsense − Vramp changes sign, a ramp counter is latched in pixel
memory. (c) In MCBS ADC pixels, there is only one bit stored per pixel, which saves area. (d) Ramp comparison is done
multiple times in sequence, each time to resolve one bit of a code that identifies the ramp value.
In MCBS ADC pixels, the detector output is compared n times in a single frame to the "same" 2^n-valued ramp
signal. One bit is resolved (and read out) each time, so the pixel needs to contain only one latch bit, instead of
all n bits.
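The bit-serial readout can be sketched behaviorally as follows. This is an idealized model with illustrative names (v_sense, v_ref), not a circuit description: each ramp pass broadcasts one bit of the code naming the current ramp level, and the pixel latches the bit of the last level not exceeding its sensed voltage. Practical MCBS designs typically use Gray-coded ramps to avoid comparator glitches.

```python
def mcbs_digitize(v_sense, n_bits=3, v_ref=1.0):
    """Behavioral sketch of multiple-channel bit-serial (MCBS) conversion.

    The same 2**n_bits-valued staircase ramp is swept once per bit plane.
    Alongside each ramp level, bit k of the code naming that level is
    broadcast; the single in-pixel latch keeps the bit of the last level
    that the sensed voltage still exceeds.
    """
    levels = 2 ** n_bits
    lsb = v_ref / levels
    code = 0
    for k in range(n_bits):            # one ramp pass per bit
        latch = 0                      # the pixel's single latch bit
        for level in range(levels):    # staircase ramp sweep
            if v_sense >= level * lsb:
                latch = (level >> k) & 1
        code |= latch << k             # bit read out between passes
    return code
```

For v_sense = 0.6 V with a 1 V reference and 3 bits, the three passes resolve code 4 (binary 100), i.e., the floor of 0.6/0.125, so the result matches an ordinary 3-bit quantizer while storing only one bit in the pixel at a time.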
3.2.2 Oversampling ADCs
As shown in Table 3, ADC architectures may be divided into three groups based on conversion speed and
accuracy.54 Inside the pixel, video capture is a low-bandwidth, i.e., low-speed, application that demands high
bit-resolution, i.e., high accuracy, for high image quality. These specifications make oversampling ADCs, such
as delta-sigma (∆Σ) ADCs, especially suitable for pixel-level data conversion.
Figure 8. A true ∆Σ ADC pixel, as shown, has both a modulator and decimator. The modulator oversamples and quantizes
the ADC input, while shaping noise to high frequencies. The decimator filters the modulator signal and down-samples it
to the Nyquist rate. In this schematic, a logarithmic sensor is shown, but linear sensors may also be used.
DPS arrays have been realized with first-order ∆Σ modulators inside each pixel.36–38 Higher-order ∆Σ
modulators demonstrate better noise-shaping performance. However, they take more area and power. Although
∆Σ modulators are oversampling ADCs, they are not ∆Σ ADCs. In a ∆Σ ADC, the digital output of the
modulator is processed by a decimator, a digital circuit that performs low-pass filtering and down-sampling.
Recently, Mahmoodi et al.39 presented a design, shown in Fig. 8, that includes in-pixel decimation.
Without in-pixel decimation, the bandwidth required to read the modulator outputs of a large array of pixels
may be very high. As a result, either frame size, frame rate, or oversampling ratio has to be compromised.
Lowering the oversampling ratio reduces the noise filtering and degrades the accuracy of the ∆Σ ADC. On the
other hand, with in-pixel decimation, a large number of transistors are needed per pixel, which results in larger
pixels. While this is acceptable for invisible-band applications, further efforts to shrink the in-pixel ∆Σ ADC
are needed to apply the technology to visible-band applications competitively.
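The modulator-plus-decimator pipeline can be illustrated with a first-order behavioral model. This is an idealized sketch, not the design of Mahmoodi et al.; in particular, the plain averaging decimator stands in for a proper low-pass (e.g., sinc) filter, and the input is assumed normalized to [0, 1].

```python
def first_order_delta_sigma(x, osr=64):
    """First-order delta-sigma ADC model for a normalized input x in [0, 1].

    Modulator: integrate the error between the input and the 1-bit
    feedback, then quantize against a mid-scale threshold. Decimator:
    a plain average of the oversampled bitstream (a real decimator
    low-pass filters and down-samples to the Nyquist rate).
    """
    integrator, fb, bits = 0.0, 0, []
    for _ in range(osr):
        integrator += x - fb          # delta (error), then sigma (integrate)
        fb = 1 if integrator >= 0.5 else 0
        bits.append(fb)
    return sum(bits) / osr            # decimated estimate of x
```

Lowering osr in this model degrades the estimate, mirroring the trade-off described above: reducing the oversampling ratio reduces noise filtering and ADC accuracy.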
4. CONCLUSION
Transition to digital pixels can boost signal and noise figures of merit of CMOS image sensors. However, a larger
pixel area makes DPS arrays less competitive for consumer electronics applications, which dominate the image
sensor market. Electronic imaging systems for medical, automotive, industrial, and security applications form
lower-volume growing markets that can accept large pixels and benefit from DPS arrays. With many of these
systems, imaging is done in invisible bands of the spectrum, such as X-ray and IR.
DPS arrays have been demonstrated with various architectures. Some used digitization techniques that are
unique to imaging. Others adapted classical ADCs. Digital pixels not based on classical ADCs have been
demonstrated by exploiting PD and SPAD detectors. Classical Nyquist-rate ADCs have also been used successfully,
some achieving small pixels. However, according to classical ADC theory, oversampling ADCs are the best choice
for low-speed, high-accuracy applications, which match the specifications of DPS arrays.
ACKNOWLEDGMENTS
The authors thank Mr. Jing Li and Dr. Mark Alexiuk for technology and application advice. They are also
grateful to NSERC, TEC Edmonton, and IMRIS for financial and in-kind support.