Analog decoders for the color video signal have been around for a long time, but they are difficult to use and deliver limited video quality. Digital video does far more than analog: its most exciting aspect is the range of processing that analog technology simply cannot offer. By sampling the one-dimensional analog composite signal, the decoding can be performed digitally on a per-pixel basis, and many different algorithms can be applied to improve video quality.

The NTSC color composite video standard, defined in 1953, is 2:1 interlaced with 262.5 lines per field (525 lines per frame), 60 fields per second, and a 4:3 aspect ratio. The amplitude of the analog signal carries the monochrome luminance information. The hue and saturation of the color are transmitted on a 3.58 MHz sub-carrier within the same bandwidth as the monochrome signal: I and Q (or U and V) modulate the color sub-carrier in phase quadrature. Timing information such as the vertical sync and horizontal sync pulses is inserted every field or line period, respectively, to mark the beginning of a field, the beginning of a line, and the blanking intervals. To display a composite NTSC signal on a progressive RGB display such as a computer monitor, several stages of processing must be carried out. Below is a block diagram of a digital processing system for an analog NTSC composite video signal.
Figure A framework of NTSC decoding.
The blanking period starts at line 1 of each field and includes the vertical sync (vsync) pulse, which defines the beginning of a new field. Lines 10-21 are usually reserved for VBI (vertical blanking interval) data such as closed captions and teletext, and line 22 is the first line with active video. The active video lines continue until line 262.5, where the blanking of the next field begins; that interval ends at line 525. The bottom of the sync pulse is the smallest value in the sampled data stream and can be identified with a simple threshold. The vsync pulse is then detected by looking for its characteristic timing: six wide negative pulses separated by positive pulses roughly an hsync wide.
Figure Vertical Sync and Blanking Signal.
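The threshold-and-pulse-width scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the exact detector used in the project; the function names, the threshold value, and the minimum pulse width are assumptions that in practice would be derived from the sample rate.

```python
import numpy as np

def find_negative_pulses(samples, threshold):
    """Return (start, width) of each run of samples below the sync threshold."""
    below = samples < threshold
    edges = np.diff(below.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if below[0]:
        starts = np.r_[0, starts]
    if below[-1]:
        ends = np.r_[ends, len(samples)]
    return list(zip(starts, ends - starts))

def looks_like_vsync(pulses, min_wide):
    """Vertical sync: at least six consecutive 'wide' negative pulses."""
    run = best = 0
    for _, width in pulses:
        run = run + 1 if width >= min_wide else 0
        best = max(best, run)
    return best >= 6
```

Because the sync tips are the lowest levels in the signal, a single fixed threshold between blanking level and sync tip is enough to segment the pulses; the pulse widths then distinguish hsync, equalizing, and vsync broad pulses.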
Consider one video line taken directly from the sampled data: the line starts with the horizontal sync (hsync), continues with the color burst, and is followed by the active video information. Hsync detection is again done with a threshold.
Figure NTSC Line Signal Sample.
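Locating the hsync within a line amounts to finding the first falling crossing of the sync threshold. A minimal sketch (hypothetical helper name, threshold assumed to sit between blanking level and sync tip):

```python
import numpy as np

def hsync_leading_edge(line, threshold):
    """Index of the first falling crossing of the sync threshold in one
    sampled line, i.e. the hsync leading edge; None if no sync is found."""
    below = line < threshold
    edges = np.flatnonzero(~below[:-1] & below[1:])
    return int(edges[0]) + 1 if edges.size else None
```

Once the leading edge is known, the back porch (color burst) and the start of active video follow at fixed sample offsets determined by the sample rate.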
Each video line carries color burst information, located in the "back porch" of the line, which starts after the hsync and ends before the active video. The color burst consists of 9 cycles of the sub-carrier at the specific phase used to modulate the color in the current video line. The decoder can extract the burst phase and frequency of each line and perform an accurate demodulation of the color. In NTSC, the phase of the color burst is inverted on every line, and at the beginning of every odd field; four complete fields are required before the burst phase pattern repeats. To extract the burst phase and frequency for each line, a digital PLL (phase-locked loop) must be implemented. For each video line, the phase at which the color burst starts (0° or 180°) and the exact frequency are stored. The phase can be easily detected by looking at the first peak of the color burst.
Figure Color Burst Information.
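As a simpler alternative to peak inspection, the burst phase can be estimated by correlating the burst samples with quadrature reference carriers. This is an illustrative sketch only; a real decoder runs a digital PLL that also tracks the sub-carrier frequency from line to line.

```python
import numpy as np

def burst_phase(burst, f_sc, fs):
    """Estimate the color-burst phase by correlating the burst samples
    with quadrature reference carriers (correlation sketch, not a PLL)."""
    n = np.arange(len(burst))
    w = 2.0 * np.pi * f_sc / fs
    i = np.dot(burst, np.cos(w * n))   # ~ cos(phi) component
    q = np.dot(burst, np.sin(w * n))   # ~ -sin(phi) component
    return np.arctan2(-q, i)           # phase of cos(w*n + phi)
```

Averaging over all 9 burst cycles makes this estimate far more noise-robust than reading a single peak position.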
The NTSC composite video signal is formed by superimposing the quadrature-modulated chrominance signal onto the luminance signal; U and V modulate the sub-carrier at phases 90° apart. For simpler Y/C separation and color demodulation, the input analog composite signal is resampled to four times the chrominance sub-carrier frequency. With this sampling, every pixel takes the form Y ± U or Y ± V, which makes the separation easier.
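The Y ± U / Y ± V structure can be verified with a tiny model of the composite signal. The sketch below assumes constant Y, U, V and a sampling phase aligned to the U/V axes (an idealization; real lines have the burst-derived phase and varying color):

```python
import numpy as np

def composite_4fsc(y, u, v, n_samples):
    """Idealized composite samples at 4x the sub-carrier rate, with the
    sampling phase aligned to the U/V axes."""
    n = np.arange(n_samples)
    phase = np.pi / 2.0 * n        # one quarter sub-carrier cycle per sample
    return y + u * np.cos(phase) + v * np.sin(phase)
```

The four samples of each sub-carrier cycle come out as Y+U, Y+V, Y−U, Y−V, which is exactly the pattern the Y/C separator exploits.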
The conventional way to perform Y/C separation is a notch filter to extract the luminance and a bandpass filter to extract the chrominance. Usually both filters are centered at the chrominance sub-carrier with a span of ±1.3 MHz. To take advantage of the way the chrominance and luminance spectra are interleaved in the composite signal, a 1-D comb filter can be used instead.
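The line comb follows directly from the 180° chroma phase inversion between adjacent lines: averaging two lines cancels the chroma, differencing cancels the luma. A minimal sketch (perfect cancellation assumes the two lines carry identical luma, which is exactly the limitation discussed next):

```python
import numpy as np

def line_comb(cur_line, prev_line):
    """Two-line comb: chroma phase inverts from line to line, so the
    average of adjacent lines cancels chroma (leaving luma) and the
    difference cancels luma (leaving chroma)."""
    luma = (cur_line + prev_line) / 2.0
    chroma = (cur_line - prev_line) / 2.0
    return luma, chroma
```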
Although the simple vertical comb filter achieves a better degree of luminance-chrominance separation than the traditional notch/bandpass approach, it performs perfectly only on vertically oriented structures in the picture. Where the information in adjacent lines differs, decoding errors known as hanging dots occur. Adaptive techniques must therefore be employed to improve the picture quality. The 2-D adaptive filter combines the vertical comb filter, with adaptive weighting, and the horizontal notch filter.
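One simple form of the adaptive weighting is to comb the current line against whichever neighbor resembles it more, so vertical transitions do not produce hanging dots. The weighting rule below is an illustrative assumption, not the exact filter used in the project:

```python
import numpy as np

def adaptive_vertical_comb(prev_line, cur_line, next_line):
    """Comb the current line against its neighbors, weighted toward the
    neighbor that matches better (illustrative adaptive weighting)."""
    diff_up = np.abs(cur_line - prev_line)
    diff_dn = np.abs(cur_line - next_line)
    w_up = diff_dn / (diff_up + diff_dn + 1e-9)  # prefer the closer neighbor
    luma = (w_up * (cur_line + prev_line)
            + (1.0 - w_up) * (cur_line + next_line)) / 2.0
    chroma = cur_line - luma
    return luma, chroma
```

A full 2-D adaptive separator would also fall back to the horizontal notch filter when neither neighbor combs well.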
3-D adaptive Y/C separation further improves the luminance-chrominance separation: for the stationary parts of the image the separation is perfect. If the picture is stationary, corresponding pixels in successive frames (the temporal direction) are identical except that the modulated chrominance phase is inverted (180°). Looking at a pixel's luminance and chrominance over time, adding pixels between frames cancels the chrominance and subtracting them cancels the luminance, so the two can be perfectly separated from the composite signal.
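The temporal add/subtract step is the frame-comb analogue of the line comb, exploiting the 180° chroma phase flip between successive frames. A minimal sketch for a static scene:

```python
import numpy as np

def frame_comb(frame0, frame1):
    """Temporal (3-D) comb: in a static scene the chroma phase flips 180
    degrees between successive frames, so summing isolates luma and
    differencing isolates chroma."""
    luma = (frame0 + frame1) / 2.0
    chroma = (frame0 - frame1) / 2.0
    return luma, chroma
```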
However, if an area of the picture contains motion, pixels are no longer identical between frames, and 2-D adaptive Y/C separation gives better results there. A motion detector is therefore required to identify the moving areas of the picture, so that the Y/C separator can adaptively mix between the temporal and spatial filters. A simple block diagram of the 3-D Y/C separator is shown below.
Figure Y/C separation framework with notch and comb filters.
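The motion-controlled mix might be sketched as follows. The detector gain `k` and the soft per-pixel blend are assumptions for illustration; note that the detector compares frames two apart, where the chroma phase matches, so the difference reflects real motion rather than chroma inversion.

```python
import numpy as np

def motion_measure(frame_now, frame_two_ago, k=4.0):
    """Crude per-pixel motion detector: absolute difference between frames
    two apart (matching chroma phase), scaled and clipped to 0..1."""
    return np.clip(k * np.abs(frame_now - frame_two_ago), 0.0, 1.0)

def mix_yc(temporal, spatial, motion):
    """Per-pixel blend: temporal (3-D) result where static, spatial (2-D)
    result where moving."""
    return (1.0 - motion) * temporal + motion * spatial
```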
The chrominance data is demodulated by multiplying with sine and cosine sub-carrier references, as shown in the diagram below. The double-frequency components are then removed by low-pass filtering, recovering the U and V signals.
Figure Chroma Demodulation.
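The mix-then-filter step can be sketched as below. The windowed-sinc low-pass design and its ~1.4 MHz cutoff are assumptions for illustration, and the reference carriers are assumed to be phase-locked to the color burst (in the real decoder the digital PLL provides that phase).

```python
import numpy as np

def demodulate_chroma(chroma, f_sc, fs, num_taps=31):
    """Multiply the separated chroma by quadrature carriers, then low-pass
    filter to remove the 2*f_sc products, recovering U and V."""
    n = np.arange(len(chroma))
    w = 2.0 * np.pi * f_sc / fs
    u_mixed = 2.0 * chroma * np.cos(w * n)
    v_mixed = 2.0 * chroma * np.sin(w * n)
    # Hamming-windowed-sinc low-pass, cutoff ~0.1*fs (well below 2*f_sc).
    k = np.arange(num_taps) - (num_taps - 1) / 2.0
    taps = np.sinc(0.2 * k) * np.hamming(num_taps)
    taps /= taps.sum()
    u = np.convolve(u_mixed, taps, mode="same")
    v = np.convolve(v_mixed, taps, mode="same")
    return u, v
```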
The de-interlacer converts the interlaced 60 fields/second sequence, each field carrying half the vertical resolution, into a progressive 60 frames/second sequence at full resolution. The motion information is taken from the same motion detector developed for the 3-D Y/C separation. A block diagram of the de-interlacer is shown below.
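A motion-adaptive de-interlacer weaves lines from the previous frame where the picture is static and interpolates ("bobs") from neighboring lines where it moves. The sketch below is a simplified version under that assumption; argument names are hypothetical, and `motion` is the 0..1 factor from the detector above.

```python
import numpy as np

def deinterlace_field(field, prev_frame_lines, motion):
    """Motion-adaptive de-interlace of one field into a full frame:
    weave (copy from the previous frame) where static, bob (average of
    vertical neighbors) where moving."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=field.dtype)
    frame[0::2] = field                               # lines we do have
    above = field
    below = np.vstack([field[1:], field[-1:]])        # repeat last line at edge
    bob = (above + below) / 2.0
    frame[1::2] = (1.0 - motion) * prev_frame_lines + motion * bob
    return frame
```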
The implemented video processing was tested on several test patterns and motion sequences. The following are selected decoded results at progressive resolution in the RGB color space (after YUV-to-RGB color conversion).
Figure Result for Old TV Program.
Figure Result for Multi-Burst.
Figure Result of Color Bar.