HDR Best Practices - what can/can't you do?

Replicon
Posts: 50
Joined: Mon Mar 22, 2021 4:24 pm
Been thanked: 2 times

HDR Best Practices - what can/can't you do?

Post by Replicon »

Hey folks,

I wanted to learn more about HDR: how to work with HDR videos if I want to use them, and what the limitations are.

Removing HDR:

The guide pretty much says "there's no good way to de-HDR without manually regrading" - that sounds laborious, BUT then there's this Stack Exchange answer: https://video.stackexchange.com/questio ... -hdr-video

It looks like there's something called ffmkv, which should be able to de-HDR it. Has anyone tried it?

I usually pre-process my input videos with KDEnlive, to avoid wasting cloud GPU time on chunks of a video I don't need... Has anyone found a way to de-HDR as part of that render step?
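For reference, the kind of ffmpeg tone-mapping chain that answer (and similar ones) seems to suggest looks roughly like the one below. I haven't verified that this is what ffmkv actually runs, and "hable" is just one of several tone-map operators, so treat it as a sketch rather than a recipe:

ffmpeg -i hdr_input.mkv \
    -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
    -c:v libx264 -crf 18 -c:a copy sdr_output.mkv

The chain converts the HDR signal to linear light, tone-maps the dynamic range down, then converts the transfer, matrix and primaries to bt709 - whether the result is actually usable for training is exactly what I'm unsure about.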

Symptoms of working with HDR:

  • Suppose I forget to de-HDR a video... Where will I notice the issues? It sounds like everything will work as normal, except for training; is that right? How "off" will it look? Will it be a complete mess, or will it just be "not as good as I thought it could be"?

  • If I have trained a model with all NON-HDR inputs, and use an HDR video as my final convert target, will that work, or also behave badly?

bryanlyon
Site Admin
Posts: 793
Joined: Fri Jul 12, 2019 12:49 am
Answers: 44
Location: San Francisco
Has thanked: 4 times
Been thanked: 218 times

Re: HDR Best Practices - what can/can't you do?

Post by bryanlyon »

That does not properly remove HDR. It's quite simply not possible, since HDR effectively throws out the baseline that the AI relies on and replaces it with a shrug.

The reason it's impossible to automatically remove HDR is that HDR works similarly to how our eyes work: there is no objective baseline, light levels vary wildly, and SUBJECTIVE brightness is what matters (that's the "Dynamic" part, in technical terms). Making content HDR is lossy -- information is irrevocably and permanently lost. There is no way to recover that lost information.

The tools you read about in that article help to make HDR content playable on a non-HDR monitor without looking horrible, but do not actually recover the baseline.

In order to properly swap, the AI does need an OBJECTIVE baseline of brightness (REC 709 provides our baseline in this case). Without this baseline, you're asking the AI to figure out not what the person looks like, but how well it can predict the lighting changes of HDR. This means the model won't ever get around to learning the faces, and you'll just end up with weird splotches of flesh-colored pixels.
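To put rough numbers on what gets thrown away (this is just the standard SMPTE ST 2084 "PQ" curve, quoted for illustration): the PQ transfer maps a code value E' in [0, 1] to an absolute luminance of up to 10,000 nits,

Y = 10000 * ( max(E'^(1/m2) - c1, 0) / (c2 - c3 * E'^(1/m2)) )^(1/m1)
with m1 = 0.1593, m2 = 78.8438, c1 = 0.8359, c2 = 18.8516, c3 = 18.6875

whereas REC 709 material is graded for a reference display of roughly 100 nits. Any tone-map has to squeeze the top couple of orders of magnitude of that luminance range into the last few SDR code values, and each grade decides for itself how to do that, which is exactly why there is no consistent baseline left to recover.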

Replicon
Posts: 50
Joined: Mon Mar 22, 2021 4:24 pm
Been thanked: 2 times

Re: HDR Best Practices - what can/can't you do?

Post by Replicon »

Thanks for that info, that's really fascinating!

I tried experimenting a bit, and it looks like faceswap extract just doesn't work at all with HDR videos in the first place.

It just fails with "OSError: Could not load meta information" thrown from imageio_ffmpeg/_io.py while initializing.

As for the "converted" videos, extraction starts to run, but sometimes (not always) fails randomly - it just says "killed" after extracting a handful of PNGs, with no clue as to why.

Still, running ffprobe on my clips is actually revealing (the command is shown after the output below), and it makes me wonder whether the converted clip will work, or at least do something useful:

HDR clip:

color_space=bt2020nc
color_transfer=smpte2084
color_primaries=bt2020

SDR clip (the HDR clip converted using ffmkv):

color_space=bt2020nc
color_transfer=bt709
color_primaries=bt2020

Cut/Rendered clip (used KDEnlive to cut out just the clips with the faces I want to use for my test):

color_space=bt709
color_transfer=bt709
color_primaries=bt709
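
(For reference, those three fields come from an ffprobe call along these lines:)

ffprobe -v error -select_streams v:0 \
    -show_entries stream=color_space,color_transfer,color_primaries \
    -of default=noprint_wrappers=1 input.mkv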

I was only able to do an S3FD extract on that last set, with the others failing for the reasons described above. I guess that's what I'll try training with a basic lightweight model, just to see if there's anything there, or if it's truly a complete mess. :)

Replicon
Posts: 50
Joined: Mon Mar 22, 2021 4:24 pm
Been thanked: 2 times

Re: HDR Best Practices - what can/can't you do?

Post by Replicon »

Happy to report that the training did not make a huge mess: while it's blurry, it's well within what I'd expect for a lightweight model at batch size 4 on some fairly homogeneous face data, rather than a complete random mess.
