Ability to record non-realtime / frame-by-frame #213
See also w3c/mediacapture-fromelement#28 (comment), which links to relevant browser bug discussions as well.
The fact that this is not at all trivial to accomplish makes me go 😬
My use case is rendering video projects in a video editing app that is based on canvas and WebGL. In general, the flow is: prepare the WebGL stage and render one frame, capture the frame somehow, go to the next frame, and repeat. This can, and should, happen faster than the duration of the final video itself; i.e. it should be possible to export a 2-minute video in 10 seconds if performance allows it.

Currently I use the `readPixels` API, but it is the biggest bottleneck of the rendering pipeline, as it requires pixels to be sent from the GPU to the CPU. It takes ~90% of render time, which is quite surprising: all the WebGL effects, blur filters, etc. take 10% of the time, and the remaining 90% is needed only to capture pixel data.

Thus I was trying to find another method that avoids that, and https://stackoverflow.com/questions/58907270/record-at-constant-fps-with-canvascapturemediastream-even-on-slow-computers/58969196#58969196 was very promising. I hoped to create a MediaRecorder that I manually feed frame by frame, as quickly as I possibly can, where each frame is 1/FPS long. I created my recorder like this:

```ts
// Resolves after the given number of milliseconds.
const wait = (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Resolves once the recorder fires the named event.
const waitForRecorderEvent = (recorder: MediaRecorder, event: string) =>
  new Promise((resolve) => recorder.addEventListener(event, resolve, { once: true }));

export function createCanvasRecorder(source: HTMLCanvasElement, fps: number) {
  const target = source.cloneNode() as HTMLCanvasElement;
  const ctx = target.getContext("2d")!;
  ctx.drawImage(source, 0, 0);

  // frameRate of 0: frames enter the stream only via track.requestFrame().
  const stream = target.captureStream(0);
  const track = stream.getVideoTracks()[0] as CanvasCaptureMediaStreamTrack;
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=H264" });
  const dataChunks: Blob[] = [];
  recorder.ondataavailable = (evt) => dataChunks.push(evt.data);
  recorder.start();
  recorder.pause();

  return {
    async captureFrame() {
      // MediaRecorder captures in real time, so each frame must occupy
      // 1/fps of wall-clock time while the recorder is resumed.
      const timer = wait(1000 / fps);
      recorder.resume();
      ctx.clearRect(0, 0, target.width, target.height);
      ctx.drawImage(source, 0, 0);
      track.requestFrame();
      await timer;
      recorder.pause();
    },
    async finish() {
      recorder.stop();
      stream.getTracks().forEach((t) => t.stop());
      await waitForRecorderEvent(recorder, "stop");
      return new Blob(dataChunks);
    },
  };
}
```

...and it seems to work. The point is that MediaRecorder captures in real time, so I have to wait 1/FPS of wall-clock time before I can go to the next frame. Now that waiting is the biggest bottleneck: pure waiting takes the majority of the time, and I cannot advance to the next frame even though everything is ready. As a result, exporting a 2-minute video will never be faster than 2 minutes, even if rendering all the frames faster is easily doable.
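As an aside on the `readPixels` stall described above: in WebGL2, the readback can be made non-blocking by reading into a `PIXEL_PACK_BUFFER` and polling a fence, instead of stalling the pipeline. A hedged sketch of that mitigation (all names are illustrative, not from the comment above):

```ts
// Non-blocking readback: readPixels copies into a GPU-side buffer and
// returns immediately; we poll a fence and map the buffer once the GPU
// is done, so the CPU never blocks on the GPU.
function readPixelsAsync(
  gl: WebGL2RenderingContext,
  width: number,
  height: number
): Promise<Uint8Array> {
  const buf = gl.createBuffer()!;
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
  gl.bufferData(gl.PIXEL_PACK_BUFFER, width * height * 4, gl.STREAM_READ);
  // With a PIXEL_PACK_BUFFER bound, the last argument is a byte offset
  // into that buffer; the copy happens on the GPU timeline.
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, 0);
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);

  const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0)!;
  gl.flush();

  return new Promise((resolve) => {
    const poll = () => {
      if (gl.clientWaitSync(sync, 0, 0) === gl.TIMEOUT_EXPIRED) {
        setTimeout(poll, 0); // GPU not done yet; try again next tick
        return;
      }
      gl.deleteSync(sync);
      const pixels = new Uint8Array(width * height * 4);
      gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
      gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, pixels);
      gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
      gl.deleteBuffer(buf);
      resolve(pixels);
    };
    poll();
  });
}
```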
Maybe you can try the WebCodecs `VideoEncoder` to encode each rendered frame yourself. Then you need to mux those frames into a video. I'm using this library: https://github.com/Vanilagy/webm-muxer; it allows you to specify a timestamp for each frame. This method is much more flexible, and the muxer can be quickly replaced/updated independently of the browser. Here is a demo from that library: https://github.com/Vanilagy/webm-muxer/blob/main/demo/script.js
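To make the suggestion concrete, here is a hedged minimal sketch of that flow; the `drawFrame` callback and the parameter choices are assumptions, and the muxer calls follow the library's documented `target: 'buffer'` mode:

```ts
import WebMMuxer from 'webm-muxer';

// Encode a canvas frame by frame with WebCodecs, stamping each frame with
// an explicit timestamp so encoding runs as fast as rendering allows.
async function renderToWebM(
  canvas: HTMLCanvasElement,
  drawFrame: (frameIndex: number) => void, // hypothetical render callback
  totalFrames: number,
  fps: number
): Promise<Blob> {
  const muxer = new WebMMuxer({
    target: 'buffer', // collect the finished file in memory
    video: { codec: 'V_VP9', width: canvas.width, height: canvas.height },
  });
  const encoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: (e) => console.error(e),
  });
  encoder.configure({
    codec: 'vp09.00.10.08',
    width: canvas.width,
    height: canvas.height,
    bitrate: 1e6,
  });

  for (let i = 0; i < totalFrames; i++) {
    drawFrame(i);
    // Timestamps are in microseconds and entirely under our control:
    // frame i sits at i / fps seconds, regardless of wall-clock time.
    const frame = new VideoFrame(canvas, { timestamp: (i * 1e6) / fps });
    encoder.encode(frame, { keyFrame: i % 60 === 0 });
    frame.close();
  }

  await encoder.flush();
  const buffer = muxer.finalize(); // returns the ArrayBuffer for 'buffer' targets
  return new Blob([buffer!], { type: 'video/webm' });
}
```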
You just reminded me that I had commented on this thread before, haha. Look how far we've come 🥺 Also, thanks for linking my lib!!
Thanks for a great lib, @Vanilagy. Helps a ton! I can confirm that using it with `MediaStreamTrackProcessor#getReader` works as expected. Here's my example (wrapping @theatrejs):

```ts
import { createRafDriver, ISheet, val } from '@theatre/core'
import { useRef } from 'react'
import WebMMuxer from 'webm-muxer'

export const rafDriver = createRafDriver({ name: 'Hubble rAF driver' })

export const useRenderer = ({
  sheet,
  fps = 30,
  width = 1280,
  height = 720,
  bitrate = 1e6,
}: {
  sheet: ISheet
  fps?: number
  width?: number
  height?: number
  bitrate?: number
}) => {
  const { sequence } = sheet
  const duration = val(sequence.pointer.length)
  const totalFrames = duration * fps

  // A way for the renderer to signal that the frame has finished drawing
  const renderDone = useRef((_a?: unknown) => {})
  const frameReady = () => {
    return new Promise((resolve) => (renderDone.current = resolve))
  }
  const captureFrame = () => {
    renderDone.current()
  }

  const startCapture = async ({ canvas }: { canvas: HTMLCanvasElement }) => {
    let i = 0
    let framesMuxed = 0
    const videoEncoder = new VideoEncoder({
      // webm-muxer timestamps are in microseconds; chunks arrive in encode
      // order, so stamp each one with its own counter rather than the loop
      // variable, which may already have advanced when the callback fires.
      output: (chunk, meta) =>
        muxer.addVideoChunk(chunk, meta, (framesMuxed++ * 1e6) / fps),
      error: (e) => console.error(e),
    })
    videoEncoder.configure({
      codec: 'vp09.00.10.08',
      width,
      height,
      bitrate,
      bitrateMode: 'constant',
    })

    async function encodeFrame(data: VideoFrame) {
      const keyFrame = i % 60 === 0
      videoEncoder.encode(data, { keyFrame })
    }

    async function finishEncoding() {
      await videoEncoder.flush()
      muxer.finalize()
      reader.releaseLock()
      await fileWritableStream.close()
    }

    const fileHandle = await window.showSaveFilePicker({
      suggestedName: `video.webm`,
      types: [
        {
          description: 'Video File',
          accept: { 'video/webm': ['.webm'] },
        },
      ],
    })
    const fileWritableStream = await fileHandle.createWritable()
    const muxer = new WebMMuxer({
      target: fileWritableStream,
      video: {
        codec: 'V_VP9',
        width,
        height,
        frameRate: fps,
      },
    })

    await sheet.project.ready

    // frameRate of 0: frames enter the stream only via track.requestFrame()
    const track = canvas.captureStream(0).getVideoTracks()[0]
    // @ts-expect-error: the constructor takes an init dictionary, { track }
    const mediaProcessor = new MediaStreamTrackProcessor({ track })
    const reader = mediaProcessor.readable.getReader()

    for (i = 0; i < totalFrames; i++) {
      sequence.position = i / fps
      rafDriver.tick(performance.now())
      console.log(`capturing frame ${i}/${totalFrames} at simtime ${i / fps}`)
      await frameReady()
      // @ts-expect-error
      track.requestFrame()
      const { value: frame } = await reader.read()
      if (!frame) break
      await encodeFrame(frame)
      frame.close()
    }
    await finishEncoding()
  }

  return { startCapture, captureFrame }
}
```
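For completeness, a hypothetical wiring of that hook; the component, ref, and callback names are made up, and `useRenderer` and `rafDriver` come from the snippet above:

```tsx
import { useRef } from 'react'
import type { ISheet } from '@theatre/core'

// Hypothetical usage: the render loop calls captureFrame() after each frame
// finishes drawing, and a button kicks off the export.
export function ExportControls({ sheet }: { sheet: ISheet }) {
  const canvasRef = useRef<HTMLCanvasElement>(null)
  const { startCapture, captureFrame } = useRenderer({ sheet, fps: 30 })

  // Wherever the canvas is drawn (e.g. an after-render callback), signal
  // completion so startCapture's frameReady() promise resolves:
  //   renderer.onAfterRender = captureFrame

  return (
    <>
      <canvas ref={canvasRef} width={1280} height={720} />
      <button
        onClick={() =>
          canvasRef.current && startCapture({ canvas: canvasRef.current })
        }
      >
        Export
      </button>
    </>
  )
}
```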
This issue was mentioned in WEBRTCWG-2024-02-20 (Page 31) |
Hi all - I'm trying to capture a canvas with expensive draw computations (e.g. variable-length network I/O and complex 3D rendering) asynchronously, separating the render work from the capture. Essentially I'm looking for a way to decouple the video container's timestamps from the wall-clock timestamps, similar to the CCapture library.

The Media Capture from DOM Elements spec says that a call to `canvas.captureStream` with a frameRate of 0 allows users to add frames to the stream manually with `track.requestFrame()`, and this comment seems to suggest that the MediaRecorder simply reads output from the stream. But in practice it appears that MediaRecorder records in real time, using the wall clock and producing a choppy output video. I tried to call the `pause` and then `resume` MediaRecorder methods in my `capture` method before and after the `requestFrame`, but this seemed to create a corrupt output.

Reduced example: https://jsfiddle.net/akre54/71aonkeb/ - change the delay value for `setTimeout` on line 65.

What currently happens: the captured video reflects the content of the canvas as it was drawn in real time (i.e. choppy and with the setTimeout delays incorporated). What should happen: the output video is always the same duration, with smooth playback. A minimal sketch of the mechanism follows.
Any suggestions for how to accomplish this? Thanks!
Related: #177, #114, #166, w3c/mediacapture-main#575, discourse#2308
Edit: It appears what I'm trying to do is concatenate the Blobs returned from the `dataavailable` event handler into a final usable video container. The other issues are asking for concrete use cases, so I'm happy to dive into it, but I'd also just like to know if there is a way to encode from the canvas to, say, ts-ebml or webm-wasm if the MediaRecorder API is purely focused on realtime / wall-clock use cases.
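A hedged sketch of that concatenation step, assuming a `recorder` set up like the ones above:

```ts
// Collect MediaRecorder output chunks and join them into a single Blob.
const chunks: Blob[] = [];
recorder.ondataavailable = (evt) => chunks.push(evt.data);
recorder.onstop = () => {
  const video = new Blob(chunks, { type: recorder.mimeType });
  // e.g. URL.createObjectURL(video) to download or play it back
};
```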