Doesn't dynamic meta-data actually reduce dynamic range?
Discussion
Hi.
I can't help thinking that dynamic meta-data (Dolby Vision, HDR 10+) is actually going to reduce the dynamic range of an HDR programme.
Let's say you have a 500-nit TV and material mastered at 1,000 nits. Let's say this material is a film which contains a series of escalating explosions, some sort of action film pitting our hero against an arsonist/pyromaniac type. As the film progresses and the explosions/fires get bigger they push the brightness up...starting at 350 nits, the second fire is at 500...600...800, and finally the final explosion uses all 1,000 nits.
Now then, using traditional HDR10 the TV will know the brightest thing in the film is at 1,000 nits and apply appropriate tone mapping. My preference would be the way LG (and, I'm sure, others) do it...matching input to output up to about 350 nits (say) and then rolling off the response...so 500 in is 400 out, 750 in is 450 out...until 1,000 nits in is 500 nits out. Doing this does mean the additional impact of the final explosion will be somewhat limited, but there will still be a progression. The second explosion will look a lot brighter than the first, the third a bit brighter again, and so on. It's the best the TV can do to preserve the artistic vision.
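The static roll-off described above can be sketched roughly as a piecewise curve: 1:1 below a knee point, then compressing everything from the knee up to the mastering peak into the remaining display headroom. The knee value and the linear roll-off shape here are illustrative assumptions, not LG's (or anyone's) actual curve, so the intermediate outputs differ slightly from the figures quoted above, but the monotonic progression is the point:

```python
def static_tone_map(nits_in, knee=350.0, master_peak=1000.0, display_peak=500.0):
    """Map mastered luminance to display luminance with a simple knee + roll-off.

    Hypothetical curve for illustration: 1:1 up to `knee`, then a linear
    roll-off so `master_peak` lands exactly at `display_peak`.
    """
    if nits_in <= knee:
        return nits_in  # track the source 1:1 below the knee
    # Roll off linearly from (knee, knee) to (master_peak, display_peak)
    t = (nits_in - knee) / (master_peak - knee)
    return knee + t * (display_peak - knee)

# The five escalating explosions from the example above:
for explosion in (350, 500, 600, 800, 1000):
    print(explosion, "->", round(static_tone_map(explosion)))
```

Every explosion still comes out brighter than the one before it (roughly 350, 385, 408, 454, 500 nits with these assumed numbers), which is the progression being argued for.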
Using dynamic metadata the TV will know the brightness of each scene...so the first two explosions will be displayed as they should be...at 350 then 500 nits. However, the TV then has nowhere to go. The dynamic metadata will tell the TV the third explosion is 600 nits...but it's a 500-nit display, so it will just show it at 500 nits...as will the fourth and final explosions. There is now no progression through the film, wiping out the artistic intent, even though in theory the material has been displayed more accurately...
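The worry above amounts to a deliberately naive per-scene model: if a scene's peak fits within the display, show it as mastered; anything brighter simply clips at the display peak. This is a sketch of the argument, not how Dolby Vision or HDR10+ processing actually works (real implementations apply a per-scene tone curve rather than a hard clip):

```python
def dynamic_tone_map(nits_in, scene_peak, display_peak=500.0):
    """Naive per-scene mapping: display as mastered if the scene fits,
    otherwise clip at the display's peak. Illustrative assumption only."""
    if scene_peak <= display_peak:
        return nits_in  # scene fits within the display: show as mastered
    return min(nits_in, display_peak)  # otherwise hard-clip at display peak

# Each explosion is the peak of its own scene:
for peak in (350, 500, 600, 800, 1000):
    print(peak, "->", round(dynamic_tone_map(peak, scene_peak=peak)))
```

Under this model the last three explosions all land at 500 nits: the progression flattens out, which is exactly the concern being raised.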
Do I have something wrong?


