
Return WebGPU-appropriate projection matrices #12

Open
toji opened this issue Sep 5, 2024 · 1 comment

toji (Member) commented Sep 5, 2024

This was touched on in #7 and #8, and has an issue on the main spec in immersive-web/webxr#894, but I wanted to call it out in its own issue here to ensure that it gets handled/discussed appropriately.

Unlike WebGL, which uses a [-1, 1] depth range for its clip-space coordinates, WebGPU uses a [0, 1] depth range. This means that if projection matrices designed for WebGL are returned from the API and used without modification, the results will be incorrect.
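For reference, the conversion itself is just a linear remap of the matrix's z output row (z' = 0.5 * z + 0.5 * w). A minimal sketch, assuming the column-major Float32Array layout WebXR already uses (toZeroToOneDepth is a hypothetical helper, not part of any API):

// Hypothetical helper: remaps a column-major WebGL-style projection
// matrix ([-1, 1] clip-space depth) to a WebGPU-style one ([0, 1] depth)
// by rewriting the z output row as z' = 0.5 * z + 0.5 * w.
function toZeroToOneDepth(m) {
  const out = new Float32Array(m);
  for (let col = 0; col < 4; ++col) {
    // Row 2 (the z row) of each column in column-major order.
    out[col * 4 + 2] = 0.5 * m[col * 4 + 2] + 0.5 * m[col * 4 + 3];
  }
  return out;
}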

My proposal for this is that, assuming we have an "API mode switch" as discussed in #7, the projectionMatrix of every XRView produced by the session simply begins returning the correct projection matrix for the API in use. Something like so:

const session = await navigator.xr.requestSession('immersive-vr', {
  requiredFeatures: ['webgpu'], // Not the finalized API shape!
});

session.requestAnimationFrame((time, frame) => {
  const viewer = frame.getViewerPose(xrReferenceSpace);
  for (const view of viewer.views) {
    const projectionMat = view.projectionMatrix; // A [0, 1] depth-range matrix because of the 'webgpu' required feature.
    // And so on...
  }
});

Fairly obvious, I think, outside of determining the exact mechanism for specifying the graphics API.

For the sake of completeness: an alternative would be to begin returning a second, WebGPU-appropriate matrix alongside the current one (projectionMatrixGPU? projectionMatrixZeroToOne?) if we felt there was any benefit to having both. But I don't see what that benefit would be, and it would create a pit of failure for developers porting their WebXR content from WebGL to WebGPU. We could also just tell developers to do the math to transform between the two themselves, but that would be rather petty of us when we could solve the problem so easily for developers.
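To illustrate that last alternative: if the API kept returning WebGL-style matrices, every WebGPU renderer would need a per-view fixup along these lines (using the hypothetical toZeroToOneDepth() sketch above):

for (const view of viewer.views) {
  // Extra per-frame work every WebGPU app would have to remember to do.
  const projectionMat = toZeroToOneDepth(view.projectionMatrix);
}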

@mwyrzykowski

Yes, I think the WebXR specification should be updated to say that the coordinate systems, matrices returned, etc., conform to the underlying rendering framework used with WebXR. Or, alternatively, explicitly state what is returned when using WebGL and when using WebGPU.

So if WebGL is used, you get WebGL NDC space and matrices appropriate for use in a WebGL-based rendering application.

And if WebGPU is used, you get similarly appropriate matrices for an NDC space of [(-1, -1, 0), (1, 1, 1)], as indicated here, along with the expected depth ranges.

It would make no sense to require a WebGPU site which wants to be immersive to map back and forth between WebGL's conventions.
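For concreteness, a minimal sketch (not proposed API surface) of a perspective matrix built directly for that [0, 1] depth range, in the column-major layout WebXR already uses. The symmetric fovY/aspect parameters are a simplification; real XRViews describe asymmetric frustums:

// Builds a column-major perspective matrix whose clip-space depth lands
// in [0, 1]: 0 at the near plane, 1 at the far plane.
function perspectiveZeroToOne(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const rangeInv = 1 / (near - far);
  return new Float32Array([
    f / aspect, 0, 0,                     0,
    0,          f, 0,                     0,
    0,          0, far * rangeInv,       -1,
    0,          0, near * far * rangeInv, 0,
  ]);
}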
