Canvas 2D color management #646
Fantastic to see a proposal to color manage Canvas, and extend it beyond sRGB. 👍🏼 Here are some questions that occurred to me on initial reading.
Thank you so much for the quick look! Something I should have emphasized is that CanvasColorSpaceProposal.md is what was brought to WhatWG, and then the WhatWG PR is what came out of that review. It may be that I should update CanvasColorSpaceProposal.md to reflect those changes.
Indeed Chrome 92 will not have color(display-p3) et al. WCG content can be drawn to a 2D canvas via Images and via ImageData. When we were trying to decide which pieces to pick off first (CSS color vs 2D canvas), the balance came out in favor of canvas, for applications that wanted to ensure that their images weren't crushed to sRGB (even if all CSS colors were still limited to sRGB). Ultimately both are much more useful with each other.
For PaintRenderingContext2D, the actual output color space is not observable by Javascript (getImageData isn't exported). This is unlike CanvasRenderingContext2D, where the color space is observable (and has historically been a fingerprinting vector). Because of that, my sense is that the user agent should be able to select the best color space for the display device (just as it does for deciding the color space in which elements are drawn and composited), and potentially change that space behind the scenes. Having the application specify a color space for PaintRenderingContext2D feels like an unnatural constraint. Similarly, ImageBitmap and ImageBitmapRenderingContext don't want color spaces -- one should just be able to create an ImageBitmap from a source and send it to ImageBitmapRenderingContext and, by default, have it appear the same as the source would have if drawn directly as an element. (Of note is that we will likely add a color space to ImageBitmapOptions to allow asynchronous-ahead-of-time conversion for when uploading into a WebGL/GPU texture, but that is outside of the 2D context).
Indeed for non-srgb-or-display-p3 spaces, we may want to default to something more than 8 bits per pixel. That's part of why we decided not to include rec2020 in the spec (the other part being disputes about its proper definition!!). For srgb and display-p3, the overwhelming preference is for 8 bits per pixel, and so the default of 8 bits per pixel will be what we will want to stay with (using more than 8 bits per pixel comes with substantial power and memory penalties, for almost no perceptual gain). As you noted, in the HDR spec, we may want to make a selection of color space imply a particular pixel format (I'm still on the fence about that -- fortunately we're avoiding being affected by how that decision lands -- display-p3 is the most requested space).
The input colors (like other inputs) are converted from the input's color space to the canvas's color space using relative colorimetric mapping, which is the "don't do anything fancy" mapping. In your example, the rec2020 color can always be transformed to some pixel in sRGB, but that pixel may have RGB values outside of the 0-to-1 interval. Relative colorimetric intent just clamps the individual color values to 0-to-1. This is what happens today in all browsers if the browser, e.g., loads a rec2020 image that uses the full gamut and attempts to display it on a less capable monitor. (Somewhat relatedly, one thing that came up in a separate review is that it might be useful for developer tools to have a "please pretend I have a less capable monitor than I do" mode).
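As a concrete illustration of the per-component clamp described above, here is a minimal sketch (the function name and sample values are mine, not from the spec):

```javascript
// Relative-colorimetric-style per-component clamp, as described above:
// after converting a color into the canvas's color space, any channel
// value outside the [0, 1] interval is clipped independently.
function clampToGamut(rgb) {
  return rgb.map((c) => Math.min(1, Math.max(0, c)));
}

// Hypothetical result of converting a wide-gamut (e.g. rec2020) color
// into sRGB: the red and green channels land outside [0, 1].
const converted = [1.25, -0.04, 0.62];
const clamped = clampToGamut(converted); // [1, 0, 0.62]
```

In-gamut values pass through untouched; only out-of-range channels are clipped, which is what makes this the simplest (and crudest) conforming mapping.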
The current behavior is that the subsequent call to getContext('2d') will return the previously created context, even if it has different properties than what was requested the second time around. This applies to all of the settings (alpha, etc).
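That behavior can be modeled with a small sketch (a toy mock, not real browser code; `MockCanvas` and its fields are invented for illustration):

```javascript
// Toy model of the behavior described above: the first getContext call
// creates the context with the requested settings; subsequent calls
// return that same context and silently ignore any new settings.
class MockCanvas {
  constructor() {
    this.ctx = null;
  }
  getContext(type, settings = {}) {
    if (this.ctx === null) {
      this.ctx = { type, colorSpace: settings.colorSpace || 'srgb' };
    }
    return this.ctx; // settings from later calls have no effect
  }
}

const mock = new MockCanvas();
const first = mock.getContext('2d', { colorSpace: 'display-p3' });
const second = mock.getContext('2d', { colorSpace: 'srgb' });
// first === second, and second.colorSpace is still 'display-p3'
```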
Yes, this was another tricky area. There was some discussion around making the colorSpace be a mutable attribute, but there were a few things pushing against it. One was that there were indeed many reasonable things to do (clear the canvas, reinterpret_cast the pixels, convert the pixels?), and no single option was a clear winner. Another was that this matched the behavior for alpha (which will likely match the future canvas bit depth). Another was that it felt conceptually like a bad fit (especially in comparison with, e.g., WebGPU, where the GPUSwapChainDescriptor is the natural spot, and can be changed on frame boundaries). So that's how we ended up landing where we did. Does that feel reasonable to you too? In practice, if one wants to swap out a canvas for a differently-configured canvas, one can create the new element (or offscreen canvas) and drawImage the previous canvas into it (which will achieve the "convert" behavior). We also briefly discussed if it was possible for the canvas to automatically update its color space to support whatever is drawn into it (turns out it's not, at least not without scrutinizing every pixel of every texture that gets sent to it, and even then that may not be desirable).
Yes, this is a good point -- the WhatWG review changed this behavior (again, sorry I wasn't more clear about that earlier). The text that landed is what you suggest (getImageData returns the canvas's color space). Critically, getImageData, toDataURL, and toBlob have the property that if one exports a canvas (to an ImageData, Blob, or data URL) and then draws the result back onto the same canvas, the operation is a no-op (no data is lost ... unless you choose lossy compression).
Following alpha's pattern, it's queryable using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings). When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review.) If the browser doesn't support this feature at all, then there will be no colorSpace entry in CanvasRenderingContext2DSettings, so the feature may be detected through that mechanism. Thank you again for the quick feedback!
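A sketch of that detection path (the helper name is mine; the object shapes only mimic what getContextAttributes might return under the two scenarios described above):

```javascript
// Feature detection via the settings dictionary, per the mechanism
// described above: a UA that implements the feature includes a
// colorSpace entry; one that doesn't omits it. In a browser you would
// pass ctx.getContextAttributes() to this helper.
function supportsColorSpace(attrs) {
  return attrs != null && 'colorSpace' in attrs;
}

// Simulated return values for illustration only:
const attrsWithFeature = { alpha: true, colorSpace: 'display-p3' };
const attrsWithoutFeature = { alpha: true };
// supportsColorSpace(attrsWithFeature)    -> true
// supportsColorSpace(attrsWithoutFeature) -> false
```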
I've updated the listed explainer to reference this document. This is the best place to look for a concise description of the formalizations and changes being proposed in this feature. This is a revised and streamlined version of the initial proposal, reflecting the changes made during WhatWG review.
Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm itself. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose in the spec that would render implementations non-conformant if they don't use naïve clamping (in case there was any). But beyond how gamut mapping happens, there's also the question of whether it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable.
Yes, good point. And yes, particularly when extended into HDR, per-component clamping can create pretty poor-looking results.
Thanks for the heads-up. We can be softer on the language with respect to the particular gamut mapping algorithm in the canvas section (I had been trying to get that variable nailed down, but if that's getting taken care of in a more central effort, that would be better). FYI, a related topic, HDR tonemapping -- mapping from a larger luminance+chrominance range down to a narrower one -- comes up periodically in the ColorWeb CG HDR discussions.
With respect to Display P3, most (perhaps all?) users and use cases we encountered wanted the gamut capability of Display P3, rather than having Display P3 as a working space (they didn't mind having Display P3 as the working space -- it's "sRGB-like" enough that it comes with no surprises compared to the default behavior, but that wasn't the part of the feature they were most after). Allowing in-gamut and out-of-gamut colors requires having >8 bits per pixel of storage. That isn't much for a moderately-powerful desktop or laptop, but it is quite a burden (especially with respect to power consumption) for small battery-powered devices, and so most (I'm again tempted to say all?) users that I've encountered wanted Display P3 with 8 bits per pixel. (The rest of this might be getting a bit ramble-y, but it also might be some useful background on how we ended up where we did): In some of the very early versions of the canvas work we tried to separate the working color space from the storage color space. That ended up becoming unwieldy, and we discarded it -- it ended up being much more straightforward to have the storage and working space be the same. In practice, having a separate working space meant having an additional pass using that working space as a storage space, and so having the two not match ended up being downside-only. (There was one sort-of-exception, sRGB framebuffer encoding, which is useful for physically based rendering engines, but is very tightly tied to hardware texture/renderbuffer formats, and so we ended up moving it to a separate WebGL change, and those formats will also eventually find their way to WebGPU's GPUSwapChainDescriptor). We also discussed having some way to automatically allow arbitrary-gamut content that "just works", without having to specify any additional parameters, and without any performance penalties. One of the ideas was to automatically detect out-of-gamut inputs and upgrade the canvas. 
This one was discarded because it would add performance cliffs, would have a complicated implementation, and might not be what an application wants (e.g., if just one pixel is one bit outside of the gamut, they may prefer it to be clipped rather than pay a cost). Another idea could be to use the output display device's color space, but that would then become a fingerprinting vector (and would also have the issue that the output display device is a moving target).
Just noticed this -- so if I'm reading this right the colorSpace from the attributes will be
Sorry, I might not have understood the context of the question (let me know if I miss it again here!). WRT the question of "in unsupported color spaces will this attribute be
There's also the case of a user agent that hasn't implemented this feature. In that case, there will be no
To clarify my question further: I suppose user agents will implement this proposal by first implementing the
Yeah, that's correct.
Thank you. Is it correct to assume it would throw with the same error in subsequent calls to `getContext`? I.e.

```js
let ctx = canvas.getContext("2d", { colorSpace: "display-p3" });
let ctx2 = canvas.getContext("2d", { colorSpace: "flugelhorn" }); // throws?
```

Another question that came up in a breakout this week. I do see some examples in the explainer use a media query to decide which color space to use. I assume, however, that the canvas color space and the display device color space are entirely decoupled, and it's therefore entirely possible to work on a P3 canvas on a less capable (e.g. sRGB) display device. You would obviously not see the non-sRGB colors, but the underlying numbers would be unaffected. Is my assumption correct?
Yeah (IDL enum validation happens prior to executing the method steps). And yeah, that's correct, the canvas color space and computations are its own thing and not impacted by any kind of global state.
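A toy model of that validation order (illustrative only; the enum list matches the values discussed in this thread, and the throwing behavior follows the general WebIDL enum pattern, but the helper itself is invented):

```javascript
// WebIDL-style enum validation runs before any method steps, so an
// unknown value throws a TypeError on every call, regardless of
// whether a context was already created on the canvas.
const PREDEFINED_COLOR_SPACES = ['srgb', 'display-p3'];

function validateColorSpace(value) {
  if (!PREDEFINED_COLOR_SPACES.includes(value)) {
    throw new TypeError(`'${value}' is not a valid color space value`);
  }
  return value;
}

// validateColorSpace('display-p3') -> 'display-p3'
// validateColorSpace('flugelhorn') -> throws TypeError
```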
We reviewed this proposal this week and overall we are happy with the direction. We were initially troubled by some of the design decisions, but after discussing them further, we came to the same conclusions. Therefore, we are going to close this issue. We are looking forward to seeing this feature evolve further.
Thank you for the review! Please feel free to reach out if there are any follow-up questions or related topics.
I'm requesting a TAG review of Canvas 2D color management.
This was developed in the W3C's ColorWeb CG, and has been reviewed and updated in WhatWG review. I would like TAG to put their eyes on it too!
Summary: This formalizes the convention that 2D canvases are in the sRGB color space by default, that input content is converted to the 2D canvas's color space when drawing, and that "untagged" content is interpreted as sRGB. It adds a parameter whereby a 2D canvas can specify a different color space (with Display P3 being the only additional value exposed so far). Similarly, this formalizes that ImageData is sRGB by default, and adds a parameter to specify its color space.
Further details:
We'd prefer the TAG provide feedback as (please delete all but the desired option):
💬 leave review feedback as a comment in this issue and @-notify ccameron-chromium