What is a video processor and why is it important?

HD video is a wonderful thing. Who doesn't like beautiful images and digital surround sound? Who wouldn't replace a 27- or 32-inch CRT with a new HDTV? It is time to say goodbye to low-resolution TVs and the image speckle caused by poor-quality SD signals. Living with HD images would be wonderful, wouldn't it?

Of course, you can watch HD broadcasts now, but how do they look? The noise and junk information in the image has not disappeared; if anything, it is worse than before. Many people are surprised to find that the images on their new HDTV look worse than they did on the old TV.


Unfortunately, few people realize that most of today's TV programming is still standard-definition interlaced video. The SD image must be enlarged to fill the large screen of a new HDTV, and every defect is magnified along with it.


Unlike traditional CRT TVs, fixed-pixel displays dominate today's home theater market, from single- and three-panel front projectors to large flat-panel direct-view displays. Behind these competing products are several abbreviations that distinguish the technologies: LCD, DLP, LCoS (liquid crystal on silicon), and PDP (plasma display panel). These technologies create images in different ways, but they share one feature: a fixed matrix of imaging pixels. This fixed pixel structure determines the physical resolution of the display.


To map every incoming video signal onto the fixed physical resolution of such a display, the manufacturer must integrate a video processing chip. Beyond scaling the image to fit the physical resolution, this video processor has an even more important role: enhancing the image and eliminating artifacts introduced during video transmission.

Most common video sources, including DVD, SDTV, and 1080i HDTV, are interlaced: at any given moment, each field carries only half the lines of the image.


HD video displays now use digital technology. Instead of drawing lines of image information on the screen, they use an array of pixels, all displayed at once for each frame. In other words, all pixels are activated simultaneously to form a complete image, rather than building the picture one scan line at a time as a CRT does.


Even so, the video signal determines whether the source delivers interlaced or progressive information, that is, whether each transmission carries half a frame or a full frame. Digital displays ultimately require a progressive signal to operate; if an interlaced signal is received, it must be converted to progressive before being displayed. Therefore, all digital displays convert interlaced video from DVD and 1080i sources into a progressive format. This is the job of the video processor, and the process is called deinterlacing. All digital displays, many DVD players, and other source devices contain video processors.


If nothing in the video image is moving, deinterlacing is very easy: the two fields can simply be woven together to form a whole frame. However, interlaced recording does not capture the two fields of a frame at the same time. The odd field is recorded at one point in time, and the even field 1/50 or 1/60 of a second later.


Therefore, if anything in the image moves during that fraction of a second, merging the fields produces visible errors, the so-called "combing" or "feathering" artifacts.

Figure 1 Feathering or combing
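
To make this concrete, here is a minimal sketch of the "weave" operation in Python with NumPy. The function name, 8-bit grayscale fields, and array shapes are illustrative assumptions, not any particular chip's implementation:

    import numpy as np

    def weave(odd_field: np.ndarray, even_field: np.ndarray) -> np.ndarray:
        """Interleave two half-height fields into one full frame."""
        h2, w = odd_field.shape
        frame = np.empty((h2 * 2, w), dtype=odd_field.dtype)
        frame[0::2] = odd_field   # odd field fills rows 0, 2, 4, ...
        frame[1::2] = even_field  # even field fills rows 1, 3, 5, ...
        return frame

    # If an object moved during the 1/50 or 1/60 s between the two fields,
    # the interleaved rows no longer line up: that mismatch is the combing
    # shown in Figure 1.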

The easiest way (non-motion adaptive)
The easiest way to avoid these artifacts is simply to ignore the even fields. This is called a "non-motion adaptive" method. With this method, the data from the even field is discarded entirely when the two fields reach the processor. The video processing circuitry restores the full image by repeating or averaging lines, inserting the missing rows from top to bottom. Although no combing artifacts appear, image quality is compromised because half of the detail and resolution is discarded. Consider a current video processor of this kind: it uses only 540 lines from a 1080i source to create the on-screen image.
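
As a rough illustration, a line-averaging "bob" deinterlacer of this kind might look like the sketch below (NumPy, grayscale fields; the name and edge handling are simplified assumptions):

    import numpy as np

    def bob(field: np.ndarray) -> np.ndarray:
        """Rebuild a full frame from one field by averaging neighboring lines."""
        f = field.astype(np.float32)
        h2, w = f.shape
        frame = np.empty((h2 * 2, w), dtype=np.float32)
        frame[0::2] = f                       # keep the lines we actually have
        frame[1:-1:2] = (f[:-1] + f[1:]) / 2  # interpolate the discarded lines
        frame[-1] = f[-1]                     # repeat the last line at the border
        return frame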


Advanced method (frame-based motion adaptation)
More advanced deinterlacing techniques use frame-based motion adaptive algorithms. Using a simple motion calculation, the video processor can determine when the entire image is not moving.


If there is no movement anywhere in the image, the processor merges the two fields directly. With this method, a still image retains the full vertical resolution of 1080 lines, but as soon as there is any motion, half of the data is discarded and the resolution drops to 540 lines. So while a completely static test pattern looks sharp, moving video does not.
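
Reusing the weave and bob sketches above, a frame-based motion adaptive deinterlacer can be outlined as follows; the whole-frame motion measure and threshold are illustrative assumptions:

    import numpy as np

    def deinterlace_frame_adaptive(prev_odd, odd, even, threshold=4.0):
        """Weave when nothing in the image moves, otherwise bob one field."""
        # Same-parity fields sample the same scan lines at different times,
        # so their mean absolute difference is a whole-frame motion measure.
        motion = np.abs(odd.astype(np.float32) -
                        prev_odd.astype(np.float32)).mean()
        if motion < threshold:
            return weave(odd, even)  # still image: full 1080-line resolution
        return bob(odd)              # any motion at all: drop to 540 lines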


Frame-based motion adaptive techniques are now very common in standard-definition processors. However, because even frame-level motion detection is computationally complex at HD resolutions, the technique is still rare in HD video processors.

Figure 2 Non-motion adaptive method

HQV method (pixel-based motion adaptation)
HQV processing represents the most advanced progressive-scan technology available today: a true pixel-based motion adaptive approach. With HQV processing, motion is identified at the pixel level rather than the frame level. Although it is theoretically impossible to avoid discarding some moving pixels during deinterlacing, HQV processing is careful to discard only the pixels that would cause artifacts.


Pixel-based motion adaptive deinterlacing avoids artifacts on moving objects while preserving the full resolution of the non-moving portions of the screen, even when adjacent pixels are in motion.


To restore the detail lost from a field during motion, HQV processing uses a multi-angle filter to reconstruct some of the missing data along the edges of moving objects, filtering out the jagged "sawtooth" edges. This operation is called "secondary" angular interpolation because it is performed after deinterlacing, which is the first stage of processing.


HQV is not the only processor that implements pixel-based motion adaptive deinterlacing. It is important to realize, however, that not all deinterlacing techniques are equal. To achieve true per-pixel motion adaptive deinterlacing, the video processor must perform a four-field analysis: in addition to analyzing the two fields of the current frame, it must also determine which pixels of the two previous fields are moving. HQV processing uses this four-field analysis to continuously analyze motion at the pixel level, even in HD.
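
The sketch below shows the general shape of per-pixel motion adaptation over a four-field window. It is not Silicon Optix's actual algorithm; the threshold, field ordering, and edge handling are simplifying assumptions:

    import numpy as np

    def deinterlace_pixel_adaptive(f0, f1, f2, f3, threshold=8.0):
        """Per-pixel motion adaptive deinterlace over four consecutive fields.

        f2 carries the kept lines of the output frame; f1 and f3 (opposite
        parity) carry the missing lines. All fields are (h/2, w) arrays.
        """
        cur, old = f2.astype(np.float32), f0.astype(np.float32)
        prev, nxt = f1.astype(np.float32), f3.astype(np.float32)

        # Motion per pixel: compare each field with the previous field of
        # the same parity (same scan lines, two field-times apart).
        moving = (np.abs(nxt - prev) > threshold) | \
                 (np.abs(cur - old) > threshold)

        # Static pixels: weave the missing line from the neighboring field.
        # Moving pixels: bob (average the kept lines above and below;
        # np.roll wraps at the bottom edge, which a real chip would handle).
        bob_est = (cur + np.roll(cur, -1, axis=0)) / 2
        missing = np.where(moving, bob_est, prev)

        h2, w = cur.shape
        frame = np.empty((h2 * 2, w), dtype=np.float32)
        frame[0::2] = cur
        frame[1::2] = missing
        return frame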

Field format conversion and video/film detection
Movies are recorded at 24 frames per second. When a movie is played from a DVD or broadcast on TV at home, those 24 frames must be converted into 60 interlaced fields. Consider four frames of a movie: A, B, C, and D.

Figure 3 Frame-based motion adaptive method

Figure 4 Angular interpolation


The first step is to split these 4 frames into 8 fields, converting 24 frames/s into 48 interlaced fields/s. Then, to match the NTSC rate (approximately 30 frames/s, or 60 interlaced fields/s), some fields must be repeated, which is done by adding an extra field to every other frame. That is, two fields of the A frame are recorded (A odd, A even), then three fields of the B frame (B odd, B even, B odd). This cycle repeats with the C and D frames. It is called 2:3 format conversion because the two fields of one frame are followed by three fields of the next.
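
The field sequence described above can be generated in a few lines; this sketch assumes frames are NumPy arrays and that the repeated field is the first field of every other frame:

    def telecine_2_3(frames):
        """Convert 24 frames/s film into 60 fields/s with 2:3 pulldown."""
        fields = []
        for i, frame in enumerate(frames):
            odd, even = frame[0::2], frame[1::2]  # split a frame into two fields
            fields += [odd, even]
            if i % 2 == 1:                        # every other frame repeats a field
                fields.append(odd)
        return fields

    # Four film frames A, B, C, D become ten fields:
    # A-odd, A-even, B-odd, B-even, B-odd, C-odd, C-even, D-odd, D-even, D-odd
    # so 24 frames/s x 10/4 = 60 fields/s.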


When the sequence is played back on a progressive-scan display, the deinterlacing techniques mentioned earlier (non-motion adaptive, motion adaptive, and so on) could be applied. However, it is actually possible to reconstruct the original frames perfectly, without losing any data. Unlike interlaced video, whose two fields are recorded fractions of a second apart, these fields were captured at the same instant from the same film frame and only later divided into fields.


Therefore, to display a video signal that originated as a 24 frames/s movie, the video processor must analyze the fields and recognize the alternating pattern of two fields followed by three fields. This recognition and reconstruction is called inverse 3:2 pulldown, and it is missing only in the worst deinterlacers.
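
One simple way a processor could spot this cadence is to look for the repeated field, as in the sketch below (the threshold is an illustrative assumption):

    import numpy as np

    def looks_like_film(fields, threshold=2.0):
        """Detect a 2:3 cadence: every fifth field repeats the field two back."""
        diffs = [np.abs(fields[i].astype(np.float32) -
                        fields[i - 2].astype(np.float32)).mean()
                 for i in range(2, len(fields))]
        repeats = [i + 2 for i, d in enumerate(diffs) if d < threshold]
        # Repeats spaced exactly 5 apart indicate telecined film; the
        # processor can then re-pair the fields into the original frames.
        return bool(repeats) and all(b - a == 5
                                     for a, b in zip(repeats, repeats[1:]))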

Mixing videos and movies
Sometimes, after further editing and post-processing, film is converted into video with added titles, transitions, and other effects. Simply reconstructing full frames from such material would cause combing artifacts: the video-originated portions are best handled with standard deinterlacing, while the film-originated portions look better when the format conversion is detected and the original frames are reconstructed.


As with the standard deinterlacing methods described earlier, there are many ways to cope with mixed video and film. Some processors choose the best method based on whether the content is film or video. Others simply hide the artifacts by applying video deinterlacing throughout, at the cost of half the resolution for the film content.


From another perspective, all HQV processing uses per-pixel calculation. This means an HQV processor can apply format-conversion detection to the pixels representing film content while simultaneously performing pixel-based motion adaptive deinterlacing on the superimposed video content.

Noise reduction
Random noise is an inherent problem in all recorded images, and it often shows up as so-called image grain. Noise is introduced not only by post-production editing or the final stage of video compression; it can also originate at the source, as film grain or imaging-sensor noise.


The easiest way to reduce noise is to use a spatial filter that removes high-frequency data. With this method, only one frame is evaluated at a time. This does eliminate noise, but it degrades image quality because there is no way to distinguish noise from detail. The method also produces its own artifacts, making human skin in the image look as if it were made of plastic.
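
A spatial filter of this kind is essentially a blur. The box filter below is a minimal sketch, with the kernel size as an assumed parameter:

    import numpy as np

    def spatial_nr(frame: np.ndarray, k: int = 3) -> np.ndarray:
        """Suppress high-frequency noise by averaging each pixel's k x k neighborhood."""
        f = frame.astype(np.float32)
        pad = k // 2
        padded = np.pad(f, pad, mode="edge")
        out = np.zeros_like(f)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
        return out / (k * k)

    # Noise is reduced, but so is real detail: a plain blur cannot tell
    # them apart.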


A temporal filter exploits the fact that noise is a random component of the image that changes over time. Instead of evaluating frames individually, it evaluates several frames at a time. Noise can be reduced effectively by identifying the differences between frames and removing that data from the final image. If there is no object motion, this is an almost perfect noise-reduction technique that preserves as much detail as possible, and it has been used in many high-end products.


However, if objects in the picture are moving, they too produce differences between frames. If the moving objects are not separated from the noise, ghosting and smearing effects appear.


HQV processing employs a per-pixel motion adaptive and noise adaptive temporal filter to avoid the artifacts associated with conventional noise filters. To preserve the most detail, moving pixels are not subjected to unnecessary noise processing. In static areas, the strength of the noise reduction is set per pixel, based on the noise level of surrounding pixels and the previous frame, allowing the filter to adapt to the amount of noise in the image at any given moment. The result is minimal noise with the most natural, well-preserved detail.
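
A greatly simplified, per-pixel motion-gated temporal filter along these lines might look like this; the recursive blend, threshold, and strength are illustrative assumptions, not HQV's actual parameters:

    import numpy as np

    def temporal_nr(prev_out, cur, motion_threshold=10.0, strength=0.5):
        """Blend static pixels with the previous filtered frame; pass moving pixels through."""
        p = prev_out.astype(np.float32)
        c = cur.astype(np.float32)
        moving = np.abs(c - p) > motion_threshold    # per-pixel motion gate
        blended = (1 - strength) * c + strength * p  # averages noise over time
        return np.where(moving, c, blended)          # no ghosting on moving pixels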

Codec noise reduction
Digital cable, satellite, and Internet video, such as YouTube and other streaming content, produce a second type of noise, which we call mosquito noise. Ordinary noise is random; mosquito noise has a specific pattern: a blurry white speckle distortion around the edges of objects. "Block noise" is a third type of distortion, in which artificial horizontal and vertical lines give the appearance of block edges. These two types of noise are caused by heavy video compression, applied because broadcasters must fit more TV channels into a given bandwidth, personal video recorders must maximize recording capacity, and Internet content providers must speed up downloads and real-time streaming. The problem will only grow as more digital channels are added.


The HQV video processor can independently detect all three types of distortion and separate them from the genuine detail in the video.

Detail enhancement
Detail enhancement, also known as sharpening, is an essential component of all digital imaging, both HD and SD. Unfortunately, because sharpening algorithms have long been poorly implemented, the process has acquired a bad reputation and is often avoided.


Since the human visual system perceives sharpness through apparent contrast, exaggerating the difference between light and dark produces images that appear sharper. Unfortunately, the simplistic sharpening schemes of the past introduced so-called "ringing" or "halo" artifacts, in which objects are surrounded by white edges, producing a harsh image that does not reflect the originally captured scene. Halos are sometimes more distracting than the softness of an unenhanced image. For this reason, users are often advised to turn down the sharpening controls of their video equipment.
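
The classic "unsharp mask" illustrates both the technique and the problem. This sketch reuses the box blur from the noise-reduction section and is not HQV's algorithm:

    import numpy as np

    def unsharp_mask(frame: np.ndarray, amount: float = 1.0) -> np.ndarray:
        """Sharpen by adding back the difference between a frame and a blurred copy."""
        f = frame.astype(np.float32)
        detail = f - spatial_nr(frame, k=5)   # high-frequency content
        # Large `amount` values overshoot at edges, producing the white
        # "ringing"/"halo" outlines described above.
        return np.clip(f + amount * detail, 0, 255)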


HQV detail enhancement is very different. Using a more conservative algorithm and selectively identifying blurred areas before processing, HQV detail enhancement avoids halo and ringing artifacts even at its highest setting. Of course, it can also be disabled if the source has already been sharpened. A key advantage of HQV detail enhancement is its ability to produce near-HD quality from SDTV images when combined with a high-performance scaler.
