4K for Broadcast: Is it Worth the Expense?
I expect most people have a general understanding of the law of diminishing returns: the point at which the benefit gained is less than the amount invested to achieve it. In engineering we see it all the time; the next incremental technical improvement is imperceptible to the consumer, yet the cost of implementing it is significant. In my career, I have had to coach engineers to recognize and accept that while we may be able to get another tenth of a dB of noise reduction in a transmitter, it will not have any appreciable impact on the service to the viewer or listener and therefore may not be worth doing. Engineers, myself included, hate that concept, because it is those small, hard-to-achieve improvements that really exercise our brains and skills. Coming to grips with this can be quite a challenge and requires careful consideration of both the short term and the long term.
At the 2017 International Symposium on Broadband Multimedia Systems and Broadcasting in Cagliari, Italy, Dr Peter Siebert, executive director of the DVB Project in Geneva, Switzerland, presented an interesting keynote address in which he asked a couple of very compelling questions. The first was whether 4K resolution is the broadcast equivalent of the emperor’s new clothes. For those not familiar, this Hans Christian Andersen tale is about two tailors who promise to weave a suit for the emperor that will be invisible to anyone who is unfit for their position, stupid, or incompetent. The emperor is vain enough to believe this claim, and once he is outfitted in his new clothes he parades before his subjects, all of whom are unwilling to say that they cannot see the suit for fear of being judged unfit, stupid, or incompetent. It is only when a child with no fear of being judged proclaims that the emperor is not wearing anything that others begin to pick up the cry. Interestingly, in the story, the emperor suspects the cry is true but continues the parade, probably out of pride or vanity.
The Distance Test
So what does this have to do with 4K resolution? We all know that our eyes’ ability to perceive resolution on a display is a function of the viewing distance relative to the size of the screen and the number of horizontal lines. The rule of thumb for HD is three times the picture height, which varies slightly depending on whether the screen is 720 or 1080 lines. For UHD-1, the recommended viewing distance is one and a half times the picture height.
In his presentation, Dr Siebert cited research conducted at IRT in which UHD content was downconverted to three variations of high definition, upconverted back to UHD, and then shown on a 56-inch UHD display. The researchers ran two versions of the test, comparing the native content on the native-resolution display with the three upconverted HD versions: one at the proper viewing distance for a 56-inch display, and the other at 2.7 meters (almost 9 feet) from the display. At the proper viewing distance, the 720p and 1080i content was rated more than half a point worse than the native UHD on the ITU five-point quality comparison scale; even the 1080p content was rated just under half a point worse. At the 2.7-meter distance, however, all three upconverted HD versions showed less than half a point of difference compared with the native UHD content. Summarizing the data, Dr Siebert said, “When comparing UHD-1 resolution with 1080p50 there is a performance improvement of about 0.5 point.” That is a statistically significant, but nevertheless minor, improvement for going beyond HD resolution.
What does this mean for broadcast? Well, we must consider the law of diminishing returns. We know that broadcasting UHD requires more resources, above all channel capacity, with resolution being the largest consumer of the extra overhead. We also know that, with very few exceptions, consumers sit farther from their televisions than the proper viewing distance. So the operative question becomes whether the perceived quality improvement, which the IRT results indicate is statistically significant but small, is worth the resource overhead needed to broadcast the higher resolution.
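As a rough illustration of those rules of thumb, here is a minimal sketch (my own, not from the presentation) that works out the 3H and 1.5H distances for a 56-inch display and compares them with the 2.7-meter test distance. The 16:9 aspect ratio is an assumption; the article does not state the aspect ratio of the test display.

```python
import math

def picture_height_m(diagonal_in, aspect=(16, 9)):
    """Picture height in metres for a given screen diagonal (inches) and aspect ratio."""
    w, h = aspect
    height_in = diagonal_in * h / math.hypot(w, h)
    return height_in * 0.0254  # inches to metres

# 56-inch display, as used in the IRT test described above (16:9 assumed)
H = picture_height_m(56)

print(f"Picture height:                {H:.2f} m")
print(f"HD rule of thumb (3 x H):      {3.0 * H:.2f} m")
print(f"UHD-1 rule of thumb (1.5 x H): {1.5 * H:.2f} m")
print("Second test distance:          2.70 m")
```

For a 56-inch 16:9 panel the picture height works out to roughly 0.7 meters, so the UHD-1 distance is about 1 meter and the HD distance about 2.1 meters; the 2.7-meter test position sits beyond both, which is consistent with the resolution differences washing out at that distance.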
HDR in a 1080p World
Before answering that question, it may be important for you and your decision-making team to remember that UHD-1 is a bouquet of capabilities. More pixels are obviously one component, but so are high dynamic range (HDR), wide color gamut (WCG), and high frame rate (HFR). With the exception of resolution, all of these capabilities can be deployed in a 1080p system, which the Ultra HD Forum includes in its Phase A content parameters. While I suspect HDR and WCG could also be applied to the 720p and 1080i HD formats, I am not aware of anyone showing work in this area. Given this, it is quite understandable why some broadcast organizations are considering moving to 1080p, incorporating HDR and WCG in their plans, and allowing the upconversion to take place in the display. So it is important for those of us making the decision to consider all the factors and make a sound choice rather than the vain choice made by the emperor.
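To put some numbers behind the capacity argument, the back-of-the-envelope sketch below (my illustration, not a figure from the article or the presentation) compares uncompressed sample rates for 1080p50 and 2160p50, both at 10 bits with 4:2:2 chroma subsampling, which are assumed parameters for the sake of the comparison.

```python
def raw_rate_gbps(width, height, fps, bits_per_sample, samples_per_pixel=2.0):
    """Approximate uncompressed video data rate in Gbit/s.

    samples_per_pixel = 2.0 corresponds to 4:2:2 chroma subsampling
    (one luma plus, on average, one chroma sample per pixel); 4:2:0 would be 1.5.
    """
    return width * height * fps * samples_per_pixel * bits_per_sample / 1e9

# 1080p50 at 10 bits: HD resolution with HDR/WCG carried in the extra bit depth
hd_1080p = raw_rate_gbps(1920, 1080, 50, 10)

# 2160p50 at the same frame rate and bit depth: UHD-1 resolution
uhd_2160p = raw_rate_gbps(3840, 2160, 50, 10)

print(f"1080p50, 10-bit 4:2:2: ~{hd_1080p:.2f} Gbit/s")
print(f"2160p50, 10-bit 4:2:2: ~{uhd_2160p:.2f} Gbit/s ({uhd_2160p / hd_1080p:.0f}x)")
```

Modern codecs recover some of that fourfold overhead, so compressed bit rates do not scale linearly with pixel count, but resolution remains the component that drives most of the additional channel capacity, which is exactly what the 1080p-plus-HDR/WCG approach avoids.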
The other interesting question Dr Siebert raised had to do with future advancements and the limits of the human visual system. Remember that UHD-1 has the four components of 4K resolution, HDR, WCG, and HFR. During his keynote, Dr Siebert presented experimental results indicating that in each of the latter three elements the technology is approaching the limits of our visual system. That is not to say they are at the limits, but again we have to consider the law of diminishing returns, and for me this is where it gets interesting. If I think strictly in terms of conventional broadcasting, then future improvements would seem to be on the wrong end of the scale. The incremental improvements in color space, dynamic range, and frame rate delivered to a standard UHD display may indeed be so small as to be imperceptible and therefore may not be worth the investment in resources. But what about the future? What will the displays of the future be? Holographic? Will we come up with technologies that allow us to enhance the human visual system, or bypass it entirely and directly stimulate the visual cortex? If so, will we want the content we are creating today to have value to the consumers of that content in the future? And if so, how do we ensure that there is sufficient information available?
Now the law of diminishing returns becomes a little more complex, because part of the equation has to do with the projected or proposed long-term value of the content. The rule of thumb I have always applied to content creation is that if the content has long-term value, it should be created at the highest quality possible given the available resources. I include metadata in the quality metric because in the future it may be even more important to the value of the content than the essence. Dr Siebert noted in his presentation that time did not allow him to address the considerable capabilities and impact of next-generation audio. My own research and experience indicate that audio is equally important to the consumer’s total quality of experience. Remember, the content we are creating today may one day be consumed on the equivalent of a holodeck, and metadata will be used to map the 2D essence into 3D space.
The author, Bill Hayes, is director of engineering for Iowa Public Television. The article was first published in an IEEE Broadcast Technology Society paper.