PQA600A Picture Quality Analyzer Datasheet
The PQA600A (PQA) is the latest-generation Picture Quality Analyzer built on the Emmy Award winning Tektronix PQA200/300. Based on the concepts of the human vision system, the PQA provides a suite of repeatable, objective quality measurements that closely correspond with subjective human visual assessment. These measurements provide valuable information to engineers working to optimize video compression and recovery, and to maintain a level of common carrier and distribution transmission service to clients and viewers.
Key features
- Fast, accurate, repeatable, and objective picture quality measurement (Option BAS)
- Predicts DMOS (Differential Mean Opinion Score) based on Human Vision System Model (Option BAS)
- SD/HD/3G SDI and HDCP-compliant HDMI interfaces with 2-channel capture and 2-channel generation, including Swap-channel / Side-by-Side / Wipe display, on all video formats except 1080p 50/59/60
- Real-time up/down conversion at generation or capture on the SDI/HDMI interface for testing devices that include an up/down conversion process
- IP interface with simultaneous 2-channel generation / capture and IGMP support for multicast streams (Option IP)
- Picture quality measurements can be made on a variety of HD video formats (1080p, 1080i, 720p) and SD video formats (525i or 625i) (Option BAS)
- User-configurable viewing condition and display models for reference and comparison (Option ADV)
- Attention/artifact weighted measurement (Option ADV)
- Region Of Interest (ROI) on measurement execution and review (Option BAS)
- Automatic temporal and spatial alignment (Option BAS)
- Embedded reference decoder (Option BAS)
- Easy regression testing and automation using XML scripting, with script export/import from the GUI (Option ADV)
- Multiple results view options (Option BAS)
- Preinstalled sample reference and test sequences
Applications
- CODEC design, optimization, and verification
- Conformance testing, transmission equipment, and system evaluation
- Digital video mastering
- Video compression services
- Digital consumer product development and manufacturing
Compressed video requires new test methods
The true measure of any television system is viewer satisfaction. While the quality of analog and full-bandwidth digital video can be characterized indirectly by measuring the distortions of static test signals, compressed television systems pose a far more difficult challenge. Picture quality in a compressed system can change dynamically based on a combination of data rate, picture complexity, and the encoding algorithm employed. The static nature of test signals does not provide true characterization of picture quality.
Human viewer testing has been traditionally conducted as described in ITU-R Rec. BT.500-11. A test scene with natural content and motion is displayed in a tightly controlled environment, with human viewers expressing their opinion of picture quality to create a Differential Mean Opinion Score, or DMOS. Extensive testing using this method can be refined to yield a consistent subjective rating.
However, this method of evaluating the capabilities of a compressed video system can be inefficient, taking several weeks to months to perform the experiments. This test methodology can be extremely expensive to complete, and often the results are not repeatable. Thus, subjective DMOS testing with human viewers is impractical for the CODEC design phase, and inefficient for ongoing operational quality evaluation. The PQA provides a fast, practical, repeatable, and objective measurement alternative to subjective DMOS evaluation of picture quality.
System evaluation
The PQA can be used for installation, verification, and troubleshooting of each block of the video system because it is video technology agnostic: any visible differences between video input and output from processing components in the system chain can be quantified and assessed for video quality degradation. Not only can CODEC technologies be assessed in a system, but any process that has potential for visible differences can also be assessed.
For example, digital transmission errors, format conversion (e.g., 1080i to 480p in set-top box conversions), analog transmission degradation, data errors, slow display response times, frame rate reduction (for mobile transmission and videophone teleconferencing), and more can all be evaluated.

User interface of the PQA showing reference and test sequences with difference map and statistical graph.
How it works
The PQA takes two video files as inputs: a reference video sequence and a compressed, impaired, or processed version of the reference. First, the PQA performs a spatial and temporal alignment between the two sequences, without the need for a calibration stripe embedded within the video sequence. Then the PQA analyzes the quality of the test video using measurements based on the human vision system and attention models, and outputs quality measurements that are highly correlated with subjective assessments.
The results include overall quality summary metrics, frame-by-frame measurement metrics, and an impairment map for each frame. The PQA also provides traditional picture quality measures such as PSNR (Peak Signal-to-Noise Ratio) as an industry benchmark impairment diagnosis tool for measuring typical video impairments and detecting artifacts.
The reference video sequence and the test clip can have different resolutions and frame rates from each other. This capability supports a variety of repurposing applications such as format conversion, DVD authoring, IP broadcasting, and semiconductor design. The PQA can also support measurement clips with long sequence duration, allowing a video clip to be quantified for picture quality through various conversion processes.
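PSNR itself is standard arithmetic; purely as an illustration of the benchmark metric mentioned above (and not the PQA's internal implementation), a per-frame PSNR between two aligned 8-bit luma frames can be sketched as follows:

```python
import numpy as np

def psnr(ref_frame: np.ndarray, test_frame: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two same-sized 8-bit luma frames."""
    ref = ref_frame.astype(np.float64)
    tst = test_frame.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)        # mean squared error over all pixels
    if mse == 0:
        return float("inf")                # identical frames
    return 10.0 * np.log10((peak ** 2) / mse)

# Example: two stand-in 1080p luma frames taking the place of aligned reference/test frames
ref = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
tst = np.clip(ref.astype(int) + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, tst):.2f} dB")
```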
Prediction of human vision perception
PQA measurements are developed from the human vision system model and additional algorithms have been added to improve upon the model used in the PQA200/300. This new extended technology allows legacy PQR measurements for SD while enabling predictions of subjective quality rating of video for a variety of video formats (HD, SD, CIF, etc.). It takes into consideration different display types used to view the video (for example, interlaced or progressive and CRT or LCD) and different viewing conditions (for example, room lighting and viewing distance).

Picture quality analysis system
A model of the human vision system has been developed to predict the response to light stimulus with respect to the following parameters:
- Contrast including supra-threshold
- Mean luminance
- Spatial frequency
- Temporal frequency
- Angular extent
- Temporal extent
- Surround
- Eccentricity
- Orientation
- Adaptation effects

A: Modulation sensitivity vs. temporal frequency

B: Modulation sensitivity vs. spatial frequency
This model has been calibrated, over the appropriate combinations of ranges for these parameters, with reference stimulus-response data from vision science research. As a result of this calibration, the model provides a highly accurate prediction.
The graphs above are examples of scientific data regarding human vision characteristics used to calibrate the human vision system model in the PQA. Graph (A) shows modulation sensitivity vs. temporal frequency, and graph (B) shows modulation sensitivity vs. spatial frequency. The use of over 1400 calibration points supports high-accuracy measurement results.

C: Reference picture

D: Perceptual contrast map
Picture (C) is a single frame from a reference sequence containing motion, and picture (D) is the perceptual contrast map calculated by the PQA. The perceptual contrast map shows how the viewer perceives the reference sequence. The blurring of the background is caused by temporal masking due to camera panning, and the black area around the jogger shows the masking effect due to the high contrast between the background and the jogger. The PQA creates the perceptual map for both reference and test sequences, then creates a perceptual difference map for use in making perceptually based, full-reference picture quality measurements.
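The PQA's perceptual contrast computation is part of its proprietary human vision model; the toy sketch below only illustrates the general notion of a local contrast map (local deviation relative to local mean luminance) and is not the PQA's algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def toy_local_contrast(luma: np.ndarray, win: int = 17, eps: float = 1.0) -> np.ndarray:
    """Very rough local-contrast map: local std / local mean over a sliding window."""
    y = luma.astype(np.float64)
    mean = uniform_filter(y, size=win)
    mean_sq = uniform_filter(y * y, size=win)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return std / (mean + eps)              # eps avoids division by zero in dark areas

frame = np.random.randint(0, 256, (486, 720), dtype=np.uint8)   # stand-in luma frame
contrast_map = toy_local_contrast(frame)
print(contrast_map.shape, float(contrast_map.max()))
```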
Comparison of predicted DMOS with PSNR
In the examples, Reference (E) is a scene from one of the VClips library files. The test image (F) has been passed through a compression system that degraded it. In this case, the background behind the jogger in Test (F) is blurred compared to the Reference image (E).

E: Reference

F: Test
A PSNR measurement is made on the PQA of the difference between the Reference and Test clips. The highlighted white areas of PSNR Map (G) show the areas of greatest difference between the original and degraded images.
Another measurement is then made by the PQA, this time using the Predicted DMOS algorithm; the resulting Perceptual Difference Map for DMOS (H) is shown. Whiter regions in this Perceptual Contrast Difference map indicate greater perceptual contrast differences between the reference and test images.
In creating the Perceptual Contrast Difference map, the PQA uses a human vision system model to determine the differences a viewer would perceive when watching the video.

G: PSNR map

H: Perceptual difference map for DMOS
The Predicted DMOS measurement uses the Perceptual Contrast Difference Map (H) to measure picture quality. This DMOS measurement correctly recognizes that viewers perceive the jogger as less degraded than the trees in the background. The PSNR measurement uses the difference map (G) and would incorrectly include differences that viewers do not see.
Attention model
The PQA600A with Options BAS and ADV, or the PQASW with Option ADV, also incorporates an Attention Model that predicts focus of attention. This model considers:
- Motion of objects
- Skin coloration (to identify people)
- Location
- Contrast
- Shape
- Size
- Viewer distraction due to noticeable quality artifacts

Attention map example: the jogger is highlighted
These attention parameters can be customized to give greater or lesser importance to each characteristic, making each measurement that uses an attention model user-configurable. The model is especially useful for evaluating video processing tuned to a specific application. For example, if the content is sports programming, viewers are expected to focus their attention on limited regions of the scene. Highlighted areas within the attention map show the areas of the image drawing the eye's attention.
Artifact detection
Artifact detection reports a variety of different changes to the edges of the image:
- Loss of edges or blurring
- Addition of edges or Ringing/Mosquito noise
- Rotation of edges to vertical and horizontal or edge blockiness
- Loss of edges within an image block or DC blockiness
These artifact detections work as weighting parameters for subjective and objective measurements, in any combination. The results of these different measurement combinations can help to improve picture quality through the system.

Artifact detection settings
For example, artifact detection can help answer questions such as: "Will the DMOS be improved with more de-blocking filtering?" or, "Should less prefiltering be used?"
If edge-blocking weighted DMOS is much greater than blurring-weighted DMOS, the edge-blocking is the dominant artifact, and perhaps more de-blocking filtering should be considered.
In some applications, it may be known that added edges, such as ringing and mosquito noise, are more objectionable than the other artifacts. These weightings can be customized by the user and configured for the application to reflect this viewer preference, thus improving DMOS prediction.
Likewise, PSNR can be measured with these artifact weightings to determine how much of the error contributing to the PSNR measurement comes from each artifact.
The Attention Model and Artifact Detection can also be used in conjunction with any combination of picture quality measurements. This allows, for example, evaluation of how much of a particular noticeable artifact will be seen where a viewer is most likely to look.
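The PQA's weighting scheme is internal to the product; the sketch below shows only the general idea of an artifact-weighted PSNR, assuming a hypothetical per-pixel artifact weight map in [0, 1] produced by some detector:

```python
import numpy as np

def weighted_psnr(ref: np.ndarray, tst: np.ndarray, weight: np.ndarray, peak: float = 255.0) -> float:
    """PSNR where each pixel's squared error is scaled by an artifact weight map.

    `weight` is a hypothetical per-pixel map in [0, 1], e.g. near 1 where a chosen
    artifact (blockiness, ringing, ...) is detected and near 0 elsewhere.
    """
    err = (ref.astype(np.float64) - tst.astype(np.float64)) ** 2
    wmse = np.sum(weight * err) / max(np.sum(weight), 1e-9)   # weighted mean squared error
    return 10.0 * np.log10((peak ** 2) / max(wmse, 1e-12))
```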
Comprehensive picture quality analysis
The PQA provides Full Reference (FR) picture quality measurements that compare the luminance signal of reference and test videos. It also offers some No Reference (NR) measurements on the luminance signal of the test video only. Reduced Reference (RR) measurements can be made manually from differences in No Reference measurements. The suite of measurements includes:
- Critical viewing (Human vision system model-based, Full reference) picture quality
- Casual viewing (Attention weighted, Full reference, or No reference) picture quality
- Peak Signal-to-Noise ratio (PSNR, Full reference)
- Focus of attention (Applied to both Full reference and No reference measurements)
- Artifact detection (Full reference, except for DC blockiness)
- DC blockiness (Full reference and No reference)
The PQA supports these measurements through preset and user-defined combinations of display type, viewing conditions, human vision response (demographic), focus of attention, and artifact detection, in addition to the default ITU-R BT.500 conditions. The ability to configure measurement conditions helps CODEC designers evaluate design trade-offs as they optimize for different applications, and helps any user investigate how different viewing conditions affect picture quality measurement results. A user-defined measurement is created by modifying a preconfigured measurement or creating a new one, then saving and recalling the user-defined measurement from the Configure Measure dialog menu.

Configure measure dialog

Edit measure dialog
Easy-to-use interface
The PQA has two modes: Measurement and Review. The Measurement mode is used to execute the measurement selected in the Configure dialog. During measurement execution, the summary data and map results are displayed on-screen and saved to the system hard disk. The Review mode is used to view previously saved summary results and maps created either with the Measurement mode or by XML script execution. The user can choose multiple results in this mode and compare each result side by side using the synchronous display in Tile mode. Comparing multiple result maps made with different CODEC parameters and/or different measurement configurations enables easy investigation of the root cause of any difference.
Multiple result display
Resultant maps can be displayed synchronously with the reference and test video in a summary, six-tiled, or overlaid display.

Summary graph
In the Summary display, the user can see multiple measurement graphs with a bar chart, along with the reference video, test video, and difference map, during video playback. The user can also select two measurement results on a graph with auto time shifting that absorbs the timing difference at content capture, allowing the two results to be compared intuitively. Summary measures of standard parameters and perceptual summation metrics are provided for each frame and for the overall video sequence.

Graph display with time shift
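The auto time-shift feature compensates for capture timing offsets between two results; the sketch below illustrates one generic way such an offset could be estimated, by cross-correlating two per-frame metric traces (not necessarily how the PQA implements it):

```python
import numpy as np

def estimate_frame_offset(series_a: np.ndarray, series_b: np.ndarray) -> int:
    """Estimate the frame offset that best aligns two per-frame metric traces."""
    a = (series_a - series_a.mean()) / (series_a.std() + 1e-9)
    b = (series_b - series_b.mean()) / (series_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")        # cross-correlation over all lags
    return int(np.argmax(corr)) - (len(b) - 1)    # positive => series_a lags series_b

# Example: identical traces, one delayed by 12 frames
base = np.random.rand(300)
print(estimate_frame_offset(np.roll(base, 12), base))   # ~12
```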

Six-tiled display
In the Six-tiled display, the user can display two measurement results side by side, each consisting of a reference video, test video, and difference map for direct comparison.

Overlay display, reference and map
In the Overlay display, the user can control the mixing ratio with the fader bar, enabling co-location of the difference map, the reference, and the impairments in the test video.
Error logging and alarms are available to help users efficiently track down the cause of video quality problems.
All results, data, and graphs can be recalled to the display for examination.
Automatic temporal/spatial alignment
The PQA supports automatic temporal and spatial alignment, as well as manual alignment.

Auto spatial alignment execution with spatial region of interest selected
The automatic spatial alignment function can measure the cropping, scale, and shift in each dimension, even across different resolutions and aspect ratios. If extra blanking is present within the standard active region, it is measured as cropping when the automatic spatial alignment measurement is enabled.
The spatial alignment function can be used when the reference and test videos both have progressive content. Where the reference and test videos have different scanning (interlaced versus progressive, or vice versa), the full reference measurement may not be valid. Where the reference and test videos both have interlaced content, the measurement is valid as long as spatial alignment does not need to be set differently from the default scale and shift.
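Alignment algorithms differ by implementation; as a generic illustration only (the PQA's alignment also measures cropping and scale), phase correlation is one standard way to estimate a pure translation between a reference and a test frame:

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, tst: np.ndarray):
    """Estimate (dy, dx) integer translation of `tst` relative to `ref` via phase correlation."""
    R = np.fft.fft2(ref.astype(np.float64))
    T = np.fft.fft2(tst.astype(np.float64))
    cross = np.conj(R) * T
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices above the midpoint to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```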
Region of interest (ROI)
There are two types of spatial/temporal Region of Interest (ROI): Input and Output. Input ROIs are used to eliminate spatial or temporal regions from the measurement which are not of interest to the user. For example, Input Spatial ROI is used when running measurements for reference and test videos which have different aspect ratios. Input Temporal ROI, also known as temporal sync, is used to execute measurements just for selected frames and minimize the measurement execution time.

Output spatial ROI on review mode for in-depth investigation
Output ROIs can be used to review precalculated measurement results for only a subregion or temporal duration. Output Spatial ROI is selected instantly by mouse operation and gives a score for just the selected spatial area. It's an effective way to investigate a specific spatial region in the difference map for certain impairments. Output Temporal ROI is set by marker operation on the graph and allows users to get a result for just a particular scene when the video stream has multiple scenes. It also allows users to produce a result without any influence from initial transients in the human vision model. Each parameter can be embedded in a measurement for repeated operation.
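As a minimal illustration of the Output Spatial ROI concept, re-scoring a precomputed per-pixel difference map over a selected rectangle amounts to summarizing just that sub-region (the PQA's actual summary statistics are richer):

```python
import numpy as np

def roi_score(diff_map: np.ndarray, top: int, left: int, height: int, width: int) -> float:
    """Mean of a per-pixel difference map over a rectangular spatial ROI."""
    region = diff_map[top:top + height, left:left + width]
    return float(region.mean())

diff_map = np.abs(np.random.randn(1080, 1920))   # stand-in perceptual difference map
print(roi_score(diff_map, top=400, left=900, height=200, width=300))
```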
Automated testing with XML scripting
In the CODEC debugging/optimizing process, the designer may want to repeat several measurement routines as CODEC parameters are revised. Automated regression testing using XML scripting eases the restrictions of manual operation by allowing the user to write a series of measurement sequences within an XML script. The script file can be exported from or imported to the measurement configuration menu to create and manage script files easily. Measurement results of the script operation can be viewed using either the PQA user interface or any spreadsheet application that can read the created .csv summary file. Multiple scripts can be executed simultaneously for faster measurement results.

Script sample

Import/Export script in configure measure dialog

Result file sample
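The exact columns in the exported .csv depend on the measurements in the script; as a hedged sketch, assuming hypothetical `Frame` and `DMOS` columns, a result file could be post-processed outside the PQA like this:

```python
import csv
import statistics

def summarize_dmos(csv_path: str):
    """Read a hypothetical per-frame result .csv and report worst and mean DMOS."""
    dmos = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            dmos.append(float(row["DMOS"]))     # column name is an assumption
    return max(dmos), statistics.mean(dmos)

# worst, mean = summarize_dmos("005_SD_Broadcast_DMOS_results.csv")  # hypothetical file name
```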
SD/HD/3G SDI, HDMI compliant interface and IP interface
An SD/HD SDI interface and an IP interface enable both generation and capture of SDI video and IP video. The HDCP-compliant HDMI interface allows the user to directly capture HDCP-encrypted content from consumer devices such as Blu-ray players and set-top boxes without hassle. This is beneficial for comparing the performance of multiple units or models, or for monitoring the picture quality of an end-to-end broadcast chain, including the STB output at home.

HDMI compliant with HDCP support: comparing Blu-ray disk players

Simultaneous generation/capture: measuring the picture quality of Up-converter device
There are three modes of simultaneous generation capture operation that can be performed on all video formats except 1080p 50/59/60 formats: generation and capture, 2-channel capture, and 2-channel generation.
Simultaneous generation and capture
Simultaneous generation and capture lets the user play out the reference video clips directly from the PQA into the device under test. The test output from the device can then be simultaneously captured by the PQA. A real-time up/down converter can be inserted in the video signal path during generation or capture to evaluate a device that includes an up/down conversion process.
Simultaneous 2-channel capture
Simultaneous 2-channel capture lets the user capture two live signals to use as reference and test videos in evaluating the device under test in operation. To accommodate equipment processing delay that may be present in the system, the user can use the Delay Start function when capturing video. Using Delay Start minimizes the number of unused overhead frames in the test file and enables faster execution of the auto temporal alignment in the measurement.

Simultaneous 2-channel capture: evaluating the performance of a set-top box
Simultaneous 2-channel generation
Simultaneous 2-channel generation capability, available only with the SDI/HDMI interface selection, supports three types of subjective testing with one display. Swap-channel capability exchanges the reference and test video sources within a frame to help the user identify differences without moving the focus point of their eyes.

Simultaneous 2-channel generation: swapping output channels 1 and 2
Side-by-side display arranges regions of the reference and test videos next to each other in a single row of output. The Wipe display takes the left region of the reference video and the right region of the test video and merges them seamlessly into a single video output.
Simultaneous 2-channel generation: side-by-side display

Simultaneous 2-channel generation: wipe display
IGMP support
In any of these modes, the user can select a Cross Interface configuration, such as generating from SDI/HDMI and capturing from IP, or vice versa. IGMP support in IP capture makes stream selection simple for multicast streaming. The compressed video file captured through IP is converted to an uncompressed file by an internal embedded decoder.

IGMP user interface
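Joining a multicast group is what IGMP signaling accomplishes on the network side; the generic receiver sketch below (not PQA code, with an example group address and port) joins a group and reads MPEG-2 TS/UDP datagrams:

```python
import socket
import struct

GROUP = "239.1.1.1"     # example multicast group (assumption)
PORT = 5000             # example UDP port (assumption)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group causes the OS to send an IGMP membership report upstream
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)    # one UDP datagram, typically 7 x 188-byte TS packets
print(len(data), "bytes from", addr)
```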
Performance you can count on
Depend on Tektronix to provide you with performance you can count on. In addition to industry-leading service and support, this product comes backed by a one-year warranty as standard.
Supported file formats for SD/HD/3G SDI and HDCP-compliant HDMI interface
The SD/HD SDI video option can generate SDI video from files in the following formats (8 bit unless otherwise stated):
- .yuv (UYVY, YUY2)
- .v210 (10 bit, UYVY, 3 components in 32 bits)
- .rgb (BGR24)
- .avi (uncompressed, BGR32 (discard alpha channel) / BGR24 / UYVY / YUY2 / v210)
- .vcap (created by PQA600A SDI video capture)
- .vcap10 (10 bit, created by PQA600A video capture)
Frame geometry | Format | Frame rate |
---|---|---|
720 x 486 | 525i | 29.97 |
720 x 576 | 625i | 25 |
1280 x 720 | 720p | 50, 59.94, 60 |
1920 x 1080 | 1080i | 25, 29.97, 30 |
1920 x 1080 | 1080psF | 23.98, 24, 25, 29.97, 30 |
1920 x 1080 | 1080p | 23.98, 24, 25, 29.97, 30 |
1920 x 1080 | 1080p (Level A, B) | 50, 59.94, 60 |
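UYVY is a standard 4:2:2 packing (U0 Y0 V0 Y1 for each pixel pair, two bytes per pixel); as a generic illustration unrelated to the PQA's own file handling, the luma plane of one frame of an 8-bit UYVY .yuv file can be extracted like this:

```python
import numpy as np

def read_uyvy_luma(path: str, width: int, height: int, frame_index: int = 0) -> np.ndarray:
    """Return the 8-bit luma plane of one frame from a raw UYVY 4:2:2 file."""
    frame_bytes = width * height * 2                  # 2 bytes per pixel in UYVY
    with open(path, "rb") as f:
        f.seek(frame_index * frame_bytes)
        raw = np.frombuffer(f.read(frame_bytes), dtype=np.uint8)
    uyvy = raw.reshape(height, width * 2)
    return uyvy[:, 1::2].copy()                       # Y samples sit at odd byte offsets

# luma = read_uyvy_luma("reference.yuv", width=1920, height=1080)  # hypothetical file
```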
Supported file formats for IP interface
The IP interface option can generate and capture compressed files using TS support over UDP in compliance with ISO/IEC 13818-1.
Supported file formats for up/down conversion
The following formats are supported for up / down conversion:
Input format | Output format |
---|---|
525i 29.97 | 720p 59.94, 1080i 29.97 |
625i 25 | 720p 50, 1080i 25 |
720p 50 | 625i 25, 1080i 25 |
720p 59.94 | 525i 29.97, 1080i 29.97 |
720p 60 | 1080i 30 |
1080psf 23.98 | 525i 29.97 |
1080i 25 | 625i 25, 720p 50 |
1080i 29.97 | 525i 29.97, 720p 59.94 |
1080i 30 | 720p 60 |
Supported file formats for measurement
All formats are 8 bit unless otherwise stated:
- .yuv (UYVY, YUY2, YUV4:4:4, YUV4:2:0_planar)
- .v210 (10 bit, UYVY, 3 components in 32 bits)
- .rgb (BGR24, GBR24)
- .avi (uncompressed, BGR32 (discard alpha channel) / BGR24 / UYVY / YUY2 / v210)
- ARIB ITE format (4:2:0 planar with 3 separate files (.yyy))
- .vcap (created by PQA600A SDI video capture)
- .vcap10 (10 bit, created by PQA600A video capture)
The following compressed files are internally converted to an uncompressed file before measurement execution. The format support listed here is available in software version 4.0 and later.
Format | ES | ADF | MP4 | 3GPP | Quicktime | MP2 PES | MP2 PS | MP2 TS | MXF | GXF | AVI | LXF |
---|---|---|---|---|---|---|---|---|---|---|---|---|
H263 | X | X | X | X | X | |||||||
MP2 | X | X | X | X | X | X | X | X | X | |||
MP4 | X | X | X | X | X | X | ||||||
H264/AVC | X | X | X | X | X | X | X | X | X | X | ||
DV | X | X | X | X | X | X | ||||||
VC-1 | X | X | X | |||||||||
ProRes | X | |||||||||||
Quicktime | X | X | X | |||||||||
JPEG2000 | X | X | X | X | X | |||||||
VC3/DNxHD | X | X | X | X | X | |||||||
Raw | X | X | X |
Preinstalled video sequences
The following preinstalled video sequences are available:
Sequence | Resolution | Formats | Clips |
---|---|---|---|
Vclips | 1920×1088 | YUV4:2:0 planar | V031202_Eigth_Ave, V031255_TimeSquare, V031251_Stripy_jogger |
 | 1920×1080 | UYVY | V031251_Stripy_jogger |
 | 1280×720 | UYVY, YUV4:2:0 planar | V031002_Eigth_Ave, V031055_TimeSquare, V031051_Stripy_jogger with 3/10/26 Mb/s |
 | 864×486 | YUV4:2:0 planar | Converted V031051_Stripy_jogger with 2/4/7 Mb/s |
 | 320×180 | YUV4:2:0 planar | Converted V031051_Stripy_jogger with 1000/1780/2850 Kb/s |
PQA300 without Trigger | 720×486 | UYVY | Ferris, Flower, Tennis, Cheer with 2 Mb/s_25 fps |
 | 720×576 | UYVY | Auto, BBC, Ski, Soccer |
PQA300 with Trigger | 720×486 | UYVY | Mobile with 3/6/9 Mb/s |
 | 720×576 | UYVY | Mobile with 3/6/9 Mb/s |
Preconfigured measurement specifications
All preconfigured measurements require Option BAS. Where noted below, some measurements also require Option ADV.
View video: No measurement
- View video
- Requires Option BAS
Subjective prediction: Full reference
- Noticeable differences
- Requires Option BAS
- Subjective rating predictions
- Requires Option BAS
- SD display and viewing measurement class
- "005 SD Broadcast DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- HD display and viewing measurement class
- "006 HD Broadcast DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
HD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- CIF display and viewing measurement class
- "007 CIF and QVGA DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
CIF/QVGA LCD | 7 scrn heights, 20 cd/m^2 | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- D-CINEMA Projector and viewing measurement class
- "008 D-CINEMA DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
DMD Projector | 3 scrn heights, .1 cd/m^2 | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- Attention biased subjective rating predictions
- Requires Options BAS and ADV
- SD display and viewing measurement class
- "009 SD broadcast ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- HD display and viewing measurement class
- "010 HD Broadcast ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
HD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- CIF display and viewing measurement class
- "011 CIF and QVGA ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
CIF/QVGA LCD | 7 scrn heights, 20 cd/m^2 | NA | Typical | NA | Default weightings | DMOS Units Re: BT.500 Training |
- SD sports measurement class
- "012 SD Sports Broadcast ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Motion and Foreground Dominant | DMOS Units Re: BT.500 Training |
- HD sports measurement class
- "013 HD Sports Broadcast ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
HD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Motion and Foreground Dominant | DMOS Units Re: BT.500 Training |
- SD talking head measurement class
- "014 SD Talking Head Broadcast ADMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD Broadcast CRT | (ITU-R BT.500) | NA | Typical | NA | Motion and Foreground Dominant | DMOS Units Re: BT.500 Training |
- Repurposing: reference and test are independent
- Use any combination of display model and viewing conditions with each measurement
- Requires Option BAS
- Format conversion: cinema to SD DVD measurement class
- "015 SD DVD from D-Cinema DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
DMD projector and SD CRT | 7 scrn heights, 20 cd/m^2 and (ITU-R BT.500) | NA | Expert | NA | NA | DMOS Units Re: BT.500 Training |
- Format conversion: SD to CIF measurement class
- "016 CIF from SD Broadcast DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
LCD and SD Broadcast CRT | 7 scrn heights, 20 cd/m^2 and (ITU-R BT.500) | NA | Expert | NA | NA | DMOS Units Re: BT.500 Training |
- Format conversion: HD to SD measurement class
- "017 SD from HD Broadcast DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD and HD Broadcast CRT | (ITU-R BT.500) | NA | Expert | NA | NA | DMOS Units Re: BT.500 Training |
- Format conversion: SD to HD measurement class
- "017-A SD from HD Broadcast DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
SD and HD Progressive CRT | (ITU-R BT.500) | NA | Expert | NA | NA | DMOS Units Re: BT.500 Training |
- Format conversion: CIF to QCIF measurement class
- "018 QCIF from CIF and QVGA DMOS" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
QCIF and CIF/QVGA LCD | 7 scrn heights, 20 cd/m^2 | NA | Expert | NA | NA | DMOS Units Re: BT.500 Training |
- Attention
- Requires Option BAS
- Attention measurement class
- "019 Stand-alone Attention Model" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | NA | NA | NA | NA | Default weightings | Map units: % Probability of focus of attention |
Objective measurements: Full reference
- General difference
- Requires Option BAS
- PSNR measurement class
- "020 PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | NA | NA | dB units |
- Artifact measurement
- Requires Options BAS and ADV
- Removed edges measurement class
- "021 Removed Edges Percent" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | NA | NA | Blurring | NA | % |
- Added edges measurement class
- "022 Added Edges Percent" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | NA | NA | Ringing / Mosquito Noise | NA | % |
- Rotated edges measurement class
- "023 Rotated Edges Percent" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | NA | NA | Edge Blockiness | NA | % |
- % of original deviation from block DC measurement class
- "024 DC Blocking Percent" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | NA | NA | DC Blockiness | NA | % |
- Artifact classified (filtered) PSNR
- Requires Options BAS and ADV
- Removed edges measurement class
- "025 Removed Edges Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | Blurring | NA | dB units |
- Added edges measurement class
- "026 Added Edges Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | Ringing / Mosquito Noise | NA | dB units |
- Rotated edges measurement class
- "027 Rotated Edges Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | Edge Blockiness | NA | dB units |
- % of original deviation from block DC measurement class
- "028 DC Blocking Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | DC Blockiness | NA | dB units |
- Artifact annoyance weighted (filtered) PSNR
- Requires Options BAS and ADV
- PSNR w/ default artifact annoyance weights measurement class
- "029 Artifact Annoyance Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | All artifacts selected | NA | dB units |
- Repurposing
- Use the View model to resample, shift, and crop the test video to map it to the measurement
- Requires Options BAS and ADV
- Format conversion: Cinema to SD DVD measurement class
- "030 SD DVD from D-Cinema Artifact weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | All artifacts selected | NA | dB units |
- Format conversion: SD to CIF measurement class
- "031 CIF from SD Broadcast Artifact weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | All artifacts selected | NA | dB units |
- Format conversion: HD to SD measurement class
- "032 SD from HD Broadcast Artifact weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | All artifacts selected | NA | dB units |
- Format conversion: CIF to QCIF measurement class
- "033 QCIF from CIF and QVGA Artifact weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | Auto-align spatial | Selected | NA | All artifacts selected | NA | dB units |
Attention-weighted objective measurements
- General differences
- Requires Options BAS and ADV
- PSNR measurement class
- "034 Attention Weighted PSNR dB" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | NA | Selected | NA | NA | Default weightings | dB units |
Objective measurements: No reference
- Artifact
- Requires Options BAS and ADV
- Artifact measurement class
- "035 No Reference DC Blockiness Percent" measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
NA | NA | NA | NA | No-reference DC block | NA | % DC blockiness |
- Subjective prediction calibrated by subjective rating
- Conducted in 2009 with 1080i29 video content and an H.264 CODEC (refer to application note 28W_24876_0.pdf)
- Requires Option BAS
- 036 HD PQR ITU-BT500 with Interlaced CRT measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
Custom HD CRT | 3 scrn heights | NA | Custom | NA | NA | PQR units |
- 037 HD DMOS ITU-BT500 with Interlaced CRT measurement
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
Custom HD CRT | 3 scrn heights | NA | Custom | NA | NA | DMOS Units Re: BT.500 Training |
- 038 HD ADMOS ITU-BT500 with Interlaced CRT measurement 1
Display model | View model | PSNR | Perceptual difference | Artifact detection | Attention model | Summary node |
---|---|---|---|---|---|---|
Custom HD CRT | 3 scrn heights | NA | Custom | NA | Typical | DMOS Units Re: BT.500 Training |

1 Requires Options BAS and ADV
Configuration nodes
- Display model
- Display technology: CRT/LCD/DMD, each with preset and user-configurable parameters (interlace/progressive, gamma, response time, etc.). Reference display and test display can be set independently
- View model
- Viewing distance and ambient luminance, set independently for reference and test; image cropping and registration: automatic or manual control of image cropping and of test image contrast (AC gain), brightness (DC offset), and horizontal and vertical scale and shift
- PSNR
- No configurable parameters
- Perceptual difference
- Viewer characteristics (acuity, sensitivity to changes in average brightness, response speed to moving objects, sensitivity to photosensitive epilepsy triggers, etc.)
- Attention model
- Overall attention weighting for measures, temporal (Motion), spatial (Center, people (Skin), foreground, contrast, color, shape, size), distractions (Differences)
- Artifact detect
- Removed edges (Blurring), added edges (Ringing/Mosquito noise), rotated edges (Edge blockiness), and DC blockiness (Removed detail within a block)
- Summary node
- Measurement units (Subjective: Predicted DMOS, PQR, or % Perceptual Contrast; Objective: Mean Abs LSB, dB), map type (signed on gray or unsigned on black), worst-case training sequence for ITU-R BT.500 training (default or user-application tuned, determined by worst-case video % Perceptual Contrast), error log threshold, save mode