Deal with latency #2

Open

afrind opened this issue Feb 11, 2022 · 6 comments

Comments

afrind (Owner) commented Feb 11, 2022

The group that is advancing this appears to be deflecting attempts by other participants to deal with latency. Should that be in scope for this work?

SpencerDawkins (Contributor) commented:

I'm not super clear on what's being discussed here. I'm imagining

  • either the scoping of latency as <500 ms, which sits perhaps awkwardly between the "ultra-low latency" of <1 second in the mops streaming draft and the <100 ms that @fiestajetsam and I have been talking about, or
  • casual discussions about wanting to provide better interaction between video and underlying congestion control mechanisms (after concerns about interactions between adaptive streaming and underlying congestion control).

Does anyone have a better understanding here? We may need to ping the IESG/IAB for an explanation, unless someone already has one.

afrind (Owner, Author) commented Feb 11, 2022

The latency requirements for the use cases described in the BoF proposal are not as strict as those of some other media use cases (e.g. interactive applications). Referencing @kixelated's most recent post to the MoQ list:

For example, a viewer with a reliable connection may have a 500ms buffer, while a viewer with a cellular connection may have a 2s buffer, while a viewer in a developing country may have a 5s buffer, while a service that archives the stream may have a 30s buffer for maximum reliability.

So 500ms might be the lower-bound requirement for those use cases today. I wouldn't say that anyone is trying to deflect attempts to deal with latency, but some have expressed that current protocols are inflexible when it comes to latency/quality tradeoffs. Further, I think there's a desire to capture the benefits QUIC can provide "off the shelf" in terms of loss recovery and congestion control, without complicating things in the pursuit of latencies lower than this. I'm not really an expert, though, and would defer to Luke, Kirill, Victor, etc. to see if I am expressing this correctly.
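
To make the quoted buffer example concrete, here is a minimal sketch (the profile names and the fixed lookup table are purely illustrative, not from any draft or implementation) of how a player might map viewer classes to playback buffer targets:

```python
# Illustrative sketch only: hypothetical viewer profiles mapped to the buffer
# targets quoted above. A real player would adapt these from measured network
# conditions rather than a fixed lookup.

BUFFER_TARGETS_MS = {
    "reliable": 500,       # stable connection: small buffer, low latency
    "cellular": 2_000,     # more jitter and loss: larger buffer
    "constrained": 5_000,  # long or lossy path: larger still
    "archive": 30_000,     # archival service: maximize reliability
}

def buffer_target_ms(profile: str) -> int:
    """Return the playback buffer target (ms) for a viewer profile."""
    return BUFFER_TARGETS_MS.get(profile, 2_000)  # default to a middle ground
```

A real player would presumably derive these targets from measured jitter and loss rather than a static table; the point is only that the same stream can serve very different latency budgets.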

SpencerDawkins (Contributor) commented:

@afrind, I suspect this question needs to be asked on the MOQ mailing list (when we post a pointer to this repo on the MOQ list, of course!)

kixelated commented:

The developers most interested in Media over QUIC currently use higher-latency protocols like RTMP and HLS/DASH. There are some who just want a maintained version of RTMP, but most developers want improved QoS and latency.

I'm of the opinion that the primary issue with RTMP and HLS/DASH is head-of-line blocking. It's analogous to the problem HTTP had, and why QUIC was developed in the first place. This is why I've latched on to QUIC: to improve quality/latency by eliminating head-of-line blocking when possible (depending on the GoP structure).
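
As a rough sketch of the idea (using an assumed, hypothetical QUIC session API, not any real library), mapping each GoP to its own stream means a retransmission only stalls that GoP:

```python
# Rough sketch with a hypothetical `session` object (not a real QUIC library):
# each group of pictures (GoP) goes on its own QUIC stream, so retransmission
# delays in one GoP never block later, independently decodable GoPs.

from dataclasses import dataclass

@dataclass
class Gop:
    frames: list[bytes]  # encoded frames, starting with a keyframe

def send_gops(session, gops: list[Gop]) -> None:
    for gop in gops:
        stream = session.open_uni_stream()  # one stream per GoP (assumed API)
        for frame in gop.frames:            # frames within a GoP stay in order,
            stream.write(frame)             # since they depend on the keyframe
        stream.finish()                     # close the stream at the GoP boundary
```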

That being said, there are absolutely developers who currently use WebRTC and would switch to a simpler protocol if it could offer real-time latency. I just think the flaws with RTMP and HLS/DASH are more prevalent than the flaws with WebRTC.

kixelated commented:

Oh, and the biggest issue that I've seen with new video protocols is the latency strategy. Either there's no focus on latency, so it's a wild west of custom implementations, or there's a naïve belief that latency can be constant, ignoring the dynamics of the internet. I think it's very important to set latency requirements.

afrind (Owner, Author) commented Feb 18, 2022

I feel like the evolution of the BoF agenda is giving us ample coverage to discuss the full range of latency requirements of different use cases, and whether all latency targets are "in scope" for whatever initial chunk of work we bite off, whether we hope to address them in future revisions/extensions, or whether we break them out completely.
