Sensors, Image Noise, and Pixel Well Depth—and Why It All Matters for Display Measurement

Posted: Mon, October 14, 2019
Author: Shaina Warner & Anne Corning

As the display industry increasingly turns to pixel-dense displays to create vivid and detailed on-screen images, there is a growing need for precise, high-resolution measurement methods to ensure the quality of these displays. Particularly with emissive technologies like OLED and microLED, and with microdisplays viewed near to the eye, a single defective pixel or sub-pixel can impact display performance and user experience.

When pixels are more densely packed across a display, the challenge of evaluating quality is compounded. A visual inspection system, such as an imaging photometer or colorimeter, needs to capture enough visible detail in its images to isolate and accurately calculate the luminance and color of every single pixel on the display. Without sufficient resolution and clear images, small defects (like pixel-level defects) can be missed, false defects can be introduced, and imprecise per-pixel data can undermine operations such as pixel-level display uniformity correction.

As the number of pixels in displays increases, imaging systems need more and more measurement pixels (sensor pixels) to increase the resolution of measurement images and capture the display in greater detail. But because a sensor has limited area, adding measurement pixels generally means making each pixel smaller, and pixel size also matters: larger pixels have a greater "well depth" or "full well capacity." A balance must therefore be struck between resolution and pixel size. We'll get to the importance of pixel size later in this blog post.
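To get a rough sense of the scale involved, here is a minimal sizing sketch. The display resolution and the sampling ratio below are assumptions chosen purely for illustration, not a recommendation for any particular system:

```python
# Rough sizing sketch: how many sensor (measurement) pixels are needed to
# sample every display pixel several times per axis?
# All numbers are illustrative assumptions only.

display_px = (3840, 2160)        # hypothetical display resolution (width, height)
samples_per_display_px = 5       # assumed sensor pixels per display pixel, per axis

sensor_w = display_px[0] * samples_per_display_px
sensor_h = display_px[1] * samples_per_display_px
print(f"Required sensor resolution: {sensor_w} x {sensor_h} "
      f"(~{sensor_w * sensor_h / 1e6:.0f} megapixels)")
```

Even modest sampling ratios push the sensor requirement into the hundreds of megapixels, which is why resolution cannot be bought simply by shrinking pixels without limit.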

Resolution alone isn’t all that matters, however—all imaging systems are affected by a certain amount of image noise, or “unwanted signal”.1 Noise muddies image clarity, and “can produce undesirable effects such as artifacts, unrealistic edges, unseen lines, corners, blurred objects”2 and more. When uncontrolled, noise can negate the benefit of an imaging system with an ultra-high-resolution sensor for display measurement—ultimately a high-resolution/high-noise system just adds a greater number of inaccurate pixels to the image.

Imaging photometers and colorimeters are often used to inspect visual display quality, measuring variations in luminance and color, and spotting defects like dead pixels and mura (blemishes). The ProMetric® I-Series Imaging Colorimeter (left) and ProMetric Y-Series Imaging Photometer (right).

It’s Noisy In There!

An imaging system used for display inspection is a type of digital camera. It captures an image by mapping photons (light particles) onto a sensor inside the camera (CCD or CMOS), where they are converted into electrons and stored in the sensor pixels. Those electrons are then read out to create a digital image. Inevitably, noise is also captured in each of the sensor pixels. Too much noise reduces the contrast and accuracy of details in a measurement image, and the artifacts of noise can be misinterpreted as meaningful variations in the display.
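That capture chain can be sketched for a single sensor pixel. This is only a toy model: the quantum efficiency, full-well capacity, and read-noise figures are assumed values for illustration, not the specifications of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative sensor-pixel parameters (not a real device)
quantum_efficiency = 0.6      # fraction of incoming photons converted to electrons
full_well_capacity = 20_000   # maximum electrons the pixel well can hold
read_noise_e = 5.0            # electrons RMS added when the pixel is read out

def expose_pixel(mean_photons: float) -> float:
    """Simulate one exposure of one sensor pixel, returning the read-out signal."""
    photons = rng.poisson(mean_photons)                    # photon arrival (shot noise)
    electrons = rng.binomial(photons, quantum_efficiency)  # photoelectric conversion
    electrons = min(electrons, full_well_capacity)         # the well saturates when full
    return electrons + rng.normal(0.0, read_noise_e)       # readout adds electronic noise

# Two exposures of the same scene rarely return identical values: that difference is noise.
print(expose_pixel(10_000), expose_pixel(10_000))
```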

The grainy appearance of the photographic image at left is due to noise; at right, the same image with the noise removed has greater clarity. (Image License: GFDL)

Where does noise come from, and why is it inevitable? It is usually electronic in origin and can come from many sources, such as nearby electrical equipment, magnetic fields, or simply the random voltage fluctuations inherent in any electrical circuit. In an imaging device, noise can be produced by the CCD or CMOS sensor, or by the circuitry of the system. "Image noise can also originate in the unavoidable shot noise of an ideal photon detector."1

Another type of noise is read noise. Any electrical system that takes analog data from a sensor and converts it to digital format will introduce noise. The amount of noise introduced depends on how quickly the data is read out from the sensor, as well as on the quality of the electrical circuit, camera design, and board manufacture. Other types of image noise include Gaussian, film grain, and quantization noise, to name just a few.

A Brief Primer on Measurement Noise

What happens if you have too much image noise? Basically, noise makes it hard to distinguish meaningful data (signal) in an image, that is, data that reflects reality rather than artifacts of the imaging system. To detect increasingly small display defects, a higher ratio of meaningful data to imaging artifacts is necessary: the signal that a system receives must be greater than the noise that it introduces. Imaging system performance is commonly indicated by its ability to achieve a high signal-to-noise ratio, or SNR, in its images.

To demonstrate, the two plots shown below were created using data from real display test measurements (two-dimensional images of a display). The task was to detect a subtle luminance difference of a spot on a bright background. The luminance variation is given in units of sigma (σ) of the background. (Using σ as the "luminance" unit makes these plots applicable to any luminance and signal level.)

The spot shown in the upper graph (at roughly M - P on the X-axis) was very subtle compared to its background and was positively detected in approximately 40% of repeated measurements, meaning 60% of measurement instances gave false negatives. In other words, 60% of the time, a single measurement by an imaging system with a low SNR would miss the luminance variation of that pixel. A signal with a magnitude of only about 3σ is often no higher than the random peaks of the surrounding background noise, and thus difficult to distinguish.

The spot in the lower image was measured by averaging four individual measurements. Averaging does not change the spot's luminance, but it lowers the background noise, which raises the spot's contrast in units of σ. The detection rate of this spot is nearly 100%. To repeatedly detect a defect without many false positives, the defect contrast should be at least six standard deviations above the sensor noise level (6σ). The lesson to be learned is that we need enough electrons to beat the noise, which points to the value of a sensor with a greater pixel well depth that can capture more electrons.

When defect contrast is below 6σ from the background, the defect becomes easily confused with noise (top). A contrast ratio of 6σ beyond the noise level is desired for repeatable detection of defects (bottom).
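The effect shown in these plots can be approximated with a quick Monte Carlo sketch. The numbers below are synthetic (the 3σ detection threshold and noise levels are assumptions for illustration), so they will not match the measured plots exactly, but they show the same behavior: averaging four frames halves the noise and roughly doubles the defect's contrast in units of σ:

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials = 100_000
defect_contrast = 3.0   # defect sits 3 sigma above the background in a single frame
frame_noise = 1.0       # background noise of one frame (1 sigma)
frames_averaged = 4     # averaging N frames reduces the random noise by sqrt(N)

def detection_rate(noise_sigma: float) -> float:
    """Fraction of trials in which the defect exceeds a 3-sigma detection threshold."""
    measured = defect_contrast + rng.normal(0.0, noise_sigma, n_trials)
    return float(np.mean(measured > 3.0 * noise_sigma))

print("single frame detection rate:   ", detection_rate(frame_noise))
print("4-frame average detection rate:", detection_rate(frame_noise / np.sqrt(frames_averaged)))
```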

Shot Noise

Shot noise is the result of random statistical fluctuations in the number of photons arriving at the sensor; it is inherent in the nature of light. Regardless of the sensor type, there will always be shot noise. When collecting photons over a set amount of time, the likelihood of capturing exactly the same number of photons every time is very low. Think of it like flipping a coin: it's highly unlikely that you would get exactly five "heads" for every set of ten flips. Shot noise is simply an inherent statistical property of photon capture.

However, the impact of the randomness of photon shot noise is reduced when we can capture more signal—that is, when we can increase the number of photons received. This is because shot noise equals the square root of the signal. With a lower signal, noise will be greater relative to the signal, and the ratio of signal to noise (SNR) will be lower. A lower SNR means noisier images.

With a higher signal, the SNR will be higher and noise will have less impact on measurement images. Shot noise is another reason a larger pixel well size (a larger capacity to collect electrons) is better. The well capacity of a sensor pixel sets the upper limit on its SNR.
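The square-root relationship is easy to verify numerically. Here is a minimal sketch, assuming ideal Poisson photon statistics and ignoring all other noise sources:

```python
import numpy as np

rng = np.random.default_rng(2)

# Repeated exposures of one pixel at three signal levels; shot noise should
# track sqrt(signal), so SNR = signal / noise should track sqrt(signal) too.
for mean_photons in (100, 10_000, 1_000_000):
    samples = rng.poisson(mean_photons, size=50_000)
    noise = samples.std()
    snr = samples.mean() / noise
    print(f"signal {mean_photons:>9}: noise ~ {noise:8.1f} "
          f"(sqrt(signal) = {np.sqrt(mean_photons):8.1f}), SNR ~ {snr:7.1f}")
```

With only 100 photons the SNR is about 10, while a million photons push it to roughly 1,000. That is why a deeper well, which can hold more signal, ultimately buys a higher SNR.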

The Argument for Larger Pixels

So far, we’ve seen that larger pixels (greater pixel well depth) confer the following advantages for imaging quality:

  • Higher signal-to-noise ratio (more signal compared to the amount of noise)
  • Less impact from shot noise

Because the maximum SNR is limited by the square root of the well size, in order for a measurement system to detect a 1% difference, such as a 1% variation in luminance from one display pixel to the next, more than 10,000 electrons need to be captured by the sensor pixel. If the sensor pixel well reaches capacity before it holds 10,000 electrons, then the only differences we can detect from display pixel to display pixel are those greater than 1%.
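Spelled out, the arithmetic behind that 10,000-electron threshold (with $N_{\mathrm{well}}$ as the number of electrons a pixel well can hold) is:

$$
\mathrm{SNR}_{\max} \;=\; \frac{N_{\mathrm{well}}}{\sqrt{N_{\mathrm{well}}}} \;=\; \sqrt{N_{\mathrm{well}}},
\qquad
\left(\frac{\Delta L}{L}\right)_{\min} \;\approx\; \frac{1}{\mathrm{SNR}_{\max}} \;=\; \frac{1}{\sqrt{N_{\mathrm{well}}}}
$$

Setting the smallest detectable variation $(\Delta L / L)_{\min}$ to 1% requires $\sqrt{N_{\mathrm{well}}} \ge 100$, that is, $N_{\mathrm{well}} \ge 10{,}000$ electrons.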

If you are concerned with measuring variations of 1% or even lower, well capacity will have a significant impact. This is especially important for displays that are intended to be viewed at close range. While most users can't see a 1% display-pixel variation in a display held at arm's length or farther, a 1% variation could be visible in near-eye displays like those in head-mounted systems used for virtual and augmented reality.

Radiant’s ProMetric® Imaging Photometers and Colorimeters balance considerations of resolution, pixel well depth, and SNR to provide the optimal measurement capability to detect sub-percent (or sub-pixel) level differences in luminance and uniformity.

A larger pixel full well capacity means that more electrons (represented by the dots in this illustration) are captured by a sensor. Because the maximum SNR is limited by the square root of the well size, more than 10,000 electrons need to be captured by a sensor pixel to reveal a 1% variation in luminance from one display pixel to the next.

Teaser: The Photon Transfer Curve

The standard method that digital camera manufacturers use to capture quantitative and verifiable performance data on an imaging system is called the Photon Transfer Curve (PTC). PTC uses knowledge of a system’s input and output signals to derive the characteristics of the system itself, enabling apples-to-apples system performance comparisons.

The PTC is a plot of the shot noise characteristics of a light source as a function of the total amount of light received, on a log-log graph... but that's a longer discussion for a future blog post. If you want to learn about the Photon Transfer Curve, and how it encapsulates the impact of SNR and pixel well size on system performance, for now you'll just have to check out our recent webinar, "How Imaging Sensor Properties Affect Pixel-Level Measurement of Displays". In it, Radiant Product Manager Shannon Roberts and Vice President of Product Development Jens Jensen drill down on how different sensors respond to various noise conditions to help you select the optimal imaging system for your display metrology application.
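As a small preview, the general shape of a PTC can be sketched numerically. The read-noise and full-well values below are assumptions chosen for illustration, not the parameters of any real sensor:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy Photon Transfer Curve: total noise vs. mean signal on a log-log plot.
# Illustrative, assumed values only.
read_noise_e = 5.0    # flat noise floor at low signal (electrons RMS)
full_well = 20_000    # electrons; the curve ends where the well saturates

signal = np.logspace(0, np.log10(full_well), 200)    # mean signal (electrons)
total_noise = np.sqrt(read_noise_e**2 + signal)      # read noise + shot noise in quadrature

plt.loglog(signal, total_noise)
plt.xlabel("Mean signal (electrons)")
plt.ylabel("Total noise (electrons RMS)")
plt.title("Toy PTC: slope 1/2 where shot noise dominates")
plt.show()
```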

 

CITATIONS

  1. "Image Noise" on Wikipedia, retrieved 10/8/2019.
  2. Boyat, A., and Joshi, B., "A Review Paper: Noise Models in Digital Image Processing", Signal & Image Processing: An International Journal (SIPIJ), Vol. 6, No. 2, April 2015.
  3. "Resolution and Dynamic Range: How These Critical CCD Specifications Impact Imaging System Performance", Radiant Vision Systems White Paper.
 
 
 