Proposal: Better Input Transform #8
References:
[1] Glenn Kennel, "Digital Film Scanning and Recording: The Technology and Practice," SMPTE Journal, March 1994.
[2] Giorgio Trumpy et al., "Light Source Criteria for Digitizing Color Films," 2015 Color and Visual Computing Symposium.
[3] Ado Ishii, "Color Management Technology for Digital Film Mastering," IS&T/SID 11th Color Imaging Conference.
I would go one step further and propose the following.
As my experiment indicates, Resolve uses only the matrices embedded in the DNG metadata, ignoring the matrices in an Adobe DCP file. This could be fixed by rewriting the DNG EXIF with exiftool. But still, the matrices should be calculated against densitometry readings as the target, not scene CIE 1931 XYZ readings.
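For illustration, such a rewrite could be scripted around exiftool, along these lines. This is only a minimal sketch, assuming exiftool is on PATH and that the DNG ColorMatrix1/ColorMatrix2 tags are writable for the file in question; `write_dng_matrices` and the example values are placeholders, not a calibration result:

```python
# Sketch: push a custom 3x3 matrix (computed against densitometry targets) into
# the DNG metadata that Resolve actually reads, using exiftool.
import subprocess

def write_dng_matrices(dng_path: str, matrix: list[float]) -> None:
    """Write the same row-major 3x3 matrix (9 values) into ColorMatrix1/ColorMatrix2."""
    value = " ".join(f"{v:.6f}" for v in matrix)
    subprocess.run(
        ["exiftool", "-overwrite_original",
         f"-ColorMatrix1={value}", f"-ColorMatrix2={value}", dng_path],
        check=True,
    )

# Example call with placeholder values:
# write_dng_matrices("frame_0001.dng", [0.9, 0.0, 0.1, 0.0, 1.0, 0.0, 0.05, 0.0, 0.95])
```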
Currently, the image input of NamiColor is an RGB image from a camera or scanner. The input color space transform converts this RGB image into a standard RGB color space, more specifically Rec.2020, and channel alignment and the other transforms are then performed on that image.
But here's the question: why Rec.2020? Why not ACES AP0, DCI-P3, or any other RGB space? Or, one step further, why a standard RGB space at all?
Since NamiColor takes its initial idea from the Cineon digital intermediate system, I believe we should take a look at the principles of the Cineon system.
The scanning result of a Cineon scanner is what we call "printing density", or PD for short. This represents the effective density of the negative as seen by a standard intermediate stock (Kodak 5244, for example) when it is contact-printed. To accomplish this, Kodak designed the spectral sensitivity of the Cineon scanner to approximate the printing-density response.
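To make the idea concrete, here is a minimal illustration of printing density (not the actual Cineon implementation; `printing_density` and the Gaussian responsivity curves are hypothetical placeholders): integrate the negative's spectral transmittance against each print-stock channel's effective responsivity, then take the negative log.

```python
import numpy as np

wavelengths = np.arange(380, 731, 10)  # nm grid

def printing_density(transmittance, responsivity):
    """transmittance: (N,) spectral transmittance of the negative.
    responsivity: (3, N) effective print-stock responsivities (printer lamp x stock
    sensitivity), normalized so a perfectly clear negative reads density 0."""
    responsivity = responsivity / responsivity.sum(axis=1, keepdims=True)
    channel_exposure = responsivity @ transmittance     # (3,) linear channel exposures
    return -np.log10(channel_exposure)                  # printing density per channel

# Usage with fake curves: a neutral 0.5-transmittance negative reads ~0.30 PD per channel.
fake_responsivity = np.exp(
    -0.5 * ((wavelengths[None, :] - np.array([[440.0], [545.0], [690.0]])) / 30.0) ** 2
)
print(printing_density(np.full(wavelengths.shape, 0.5), fake_responsivity))
```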
Of course, when using NamiColor we are not aiming for a total simulation of a contact-print workflow. But we can still benefit from the idea of scanning density, rather than a simple "RGB dye shot" calibrated against a standard human observer looking at a back-illuminated camera negative.
So, in my view, a better input transform for a negative scan would approximate a standard Status M densitometer, or an SMPTE RP 180 densitometer.
To accomplish this, we could run a calibration (if we are scanning with a camera setup): measure the spectral power distribution of the backlight, measure the spectral sensitivity of the digital camera, and then compute a transform matrix that converts the native camera RGB image into a Status M / RP 180 density image, using least-squares optimization.
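Concretely, the fit could look something like the sketch below. This is only an illustration, not NamiColor code: `fit_density_matrix` and all of its inputs are hypothetical names, the Status M / RP 180 spectral products would come from the relevant standards, and the training spectra could be mixtures of the film's dye set.

```python
import numpy as np

def fit_density_matrix(backlight_spd, cam_sens, statusm_products, train_transmittances):
    """backlight_spd: (N,) backlight SPD; cam_sens: (3, N) camera spectral sensitivities;
    statusm_products: (3, N) Status M / RP 180 spectral products (source included);
    train_transmittances: (P, N) spectra of training patches, e.g. film-dye mixtures.
    Returns a 3x3 matrix M mapping linear camera RGB to linear densitometer responses,
    so that density ~ -log10(M @ camera_rgb)."""
    # Responses of a perfectly clear patch (T = 1), used to normalize both sides to 1.
    white_cam = (backlight_spd * cam_sens).sum(axis=1)                            # (3,)
    white_den = statusm_products.sum(axis=1)                                      # (3,)
    # Linear camera RGB and densitometer responses for every training patch.
    cam_rgb = (train_transmittances @ (backlight_spd * cam_sens).T) / white_cam   # (P, 3)
    target = (train_transmittances @ statusm_products.T) / white_den              # (P, 3)
    # Least-squares fit of M such that target ~ cam_rgb @ M.T
    M, *_ = np.linalg.lstsq(cam_rgb, target, rcond=None)
    return M.T

# Per pixel, the transform applied downstream would then be just
#   density = -log10(M @ camera_rgb)
# followed by the channel alignment NamiColor already performs.
```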
And for off-the-shelf scanners, we could first predict the scanner's spectral sensitivity from an IT8 target strip, and then run the same calibration process.
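One way to pose that prediction is a regularized least-squares fit against the known IT8 patch spectra. The sketch below is just one possible formulation (`estimate_sensitivity` is a hypothetical helper, and the second-difference smoothness prior is my assumption about a reasonable regularizer):

```python
import numpy as np

def estimate_sensitivity(patch_spectra, scanner_rgb, smoothness=1e-2):
    """patch_spectra: (P, N) known IT8 patch spectra (times the scanner lamp SPD).
    scanner_rgb: (P, 3) linear scanner readings of the same patches.
    Returns (3, N) estimated effective spectral sensitivities per channel."""
    P, N = patch_spectra.shape
    # Second-difference operator acts as a smoothness prior on the estimated curves.
    D = np.diff(np.eye(N), n=2, axis=0)                    # (N-2, N)
    A = np.vstack([patch_spectra, smoothness * D])         # (P + N - 2, N)
    sens = np.empty((3, N))
    for c in range(3):
        b = np.concatenate([scanner_rgb[:, c], np.zeros(N - 2)])
        sens[c], *_ = np.linalg.lstsq(A, b, rcond=None)
    return sens

# The estimated curves could then be fed into the same matrix-fitting step
# sketched above, in place of measured camera sensitivities.
```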
I believe this can produce a much more pleasing result than the current input color space transform, while the DCTL itself requires very little change: we are just adding a few more 3x3 matrices.
Maybe we could try this out on a sample setup and see what results we can get.