[Jellyfin] Render ahead (oneshot render) #111
Conversation
@dmitrylyzo could you resolve conflicts so we can get this moving? Additionally, documenting some examples of using renderahead would be nice, maybe in the pages showcases? |
Thanks again for working on this! Here are some remarks after just a first cursory look:
|
IIRC, it was tested on heavily animated subtitles, and these features were born in between. They are too dependent on the oneshot render code base.
This is a single mode and it uses: JavascriptSubtitlesOctopus/src/post-worker.js, line 260 (at 61c8997)
It can erase animation tags, but I haven't tested that. Jellyfin doesn't use it, so it was probably only tested during the dev cycle. Current Jellyfin options:
This? JavascriptSubtitlesOctopus/src/subtitles-octopus.js, lines 23 to 25 (at 61c8997)
If prescaleTradeoff is null (the default), only hardHeightLimit is used. JavascriptSubtitlesOctopus/src/subtitles-octopus.js, lines 705 to 708 (at 61c8997)
This is probably done to avoid wasting time rendering subtitles at an unnecessarily high quality. |
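To make the capping behaviour concrete, here is a minimal sketch in JavaScript. The option names prescaleTradeoff and hardHeightLimit come from the discussion above; softHeightLimit and the exact formula are assumptions for illustration, not the library's actual code:

```js
// Sketch of the render-height capping described above.
// prescaleTradeoff: quality/speed factor in (0, 1], or null to disable.
// softHeightLimit: assumed name for a soft cap used with the tradeoff.
// hardHeightLimit: absolute cap on the render height in pixels.
function computeRenderHeight(videoHeight, opts) {
    if (opts.prescaleTradeoff === null) {
        // Tradeoff disabled (the default): render at video size,
        // capped only by the hard limit.
        return Math.min(videoHeight, opts.hardHeightLimit);
    }
    // Tradeoff enabled: scale the height down, keeping it inside
    // both the soft and the hard limit.
    const scaled = Math.round(videoHeight * opts.prescaleTradeoff);
    return Math.min(scaled, opts.softHeightLimit, opts.hardHeightLimit);
}

// e.g. computeRenderHeight(2160, { prescaleTradeoff: null, hardHeightLimit: 1080 }) === 1080
```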
I've been busy, so I wasn't able to leave feedback on this. Is there any reason to force the use of blend render? It performs a lot worse than fast render since it can't benefit from hardware acceleration. I think it should inherit the rendering mode the user specified on init, rather than force one. Also, from my testing, SO performed worse with pre-rendering on releases that had >300 bitmaps per frame, but I'm not too sure if I made some sort of mistake or if it simply can't handle it. Shouldn't all the pre-render logic be handled on the worker? Doing it on the main thread seems very wasteful. IMO, the default hard size cap should be set to 2K, for retina displays. |
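A hedged sketch of what inheriting the init-time mode could look like (the class and method names here are hypothetical stand-ins, not the PR's actual code):

```js
// Hypothetical sketch: reuse the render mode chosen at init for the
// pre-render path instead of hard-coding blend rendering.
class RendererSketch {
    constructor(options) {
        // The mode the user picked when creating the instance.
        this.renderMode = options.renderMode || 'fast';
    }
    renderAheadFrame(time) {
        // Inherit the user's mode here rather than forcing 'blend'.
        return this.renderWith(this.renderMode, time);
    }
    renderWith(mode, time) {
        // Stand-in for the real per-mode renderer dispatch.
        return { mode, time };
    }
}
```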
Yes, this. IMHO, by default it should just use the physical canvas size without any capping, as this is the most intuitive behaviour.
But the settings themselves do not depend on |
Cherry-picked from: 848a686
Cherry-picked from: 65fbfec
Cherry-picked from: 5430cc9
Cherry-picked from: 4e42502
Cherry-picked from: 90910bd
…ed or resized) * Add bogus events covering timeline where there is no subtitles displayed * Fix seeking support Cherry-picked from: 21c84cc
Cherry-picked from: 6980deb
Cherry-picked from: 6d3a5c7
Cherry-picked from: 7fc34cf
Cherry-picked from: cea2dc9
Cherry-picked from: 88d2df4
Cherry-picked from: bd8bcab
Cherry-picked from: ecd47dd
Cherry-picked from: 67947f3
Cherry-picked from: 9dd6eb2
Cherry-picked from: e1604cb
Cherry-picked from: a8d670e
…lite mode Also marked some methods as const Cherry-picked from: d3bc472
Cherry-picked from: 90d4848
…to C++ level Cherry-picked from: a66e797
Cherry-picked from: 971a997
Cherry-picked from: a166606
…p using old size Cherry-picked from: 825c999
Cherry-picked from: 345d701
|
Yes, it won't restart until you drop the end marker (assuming events are appended). |
Yeah, with how wonderful streaming is, it might not necessarily be appended, so even that wouldn't work; every time you add an event you'd need to reset the entire cache, which is counterproductive. |
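For illustration, a sketch of why appending events forces a cache reset (prerenderCache and addSubtitleEvent are hypothetical names, not the PR's code):

```js
// Hypothetical render-ahead cache keyed by frame timestamp (ms).
const events = [];
const prerenderCache = new Map(); // time -> pre-rendered bitmap

function addSubtitleEvent(event) {
    events.push(event);
    // With streaming, the new event is not guaranteed to land at the
    // end of the timeline, so any pre-rendered frame may now be stale.
    // The safe (but counterproductive) option is a full reset:
    prerenderCache.clear();
}
```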
Defaults are supposed to work well everywhere, not just on some special hardware+software combination. |
Yes, that is what I'm saying: WASM blend won't work for high-intensity TS. |
Maybe I should've been more clear: defaults should work reasonably well on all devices and engines. WASM blending does, and so far it seems to always be better than the current default. |
Lossy performs well in 70% of cases, according to 2021's browser market shares [Chrome, Edge, Firefox]. From my testing, WASM blending had much more mixed results than fast or even normal; I'd have to re-run those tests after my performance improvements vs this new blend, because the current normal or fast performance is a joke.
Edit: the way I do it in my version of SO is to default to threaded offscreen fast, with fallbacks:
Edit2: admittedly this data is biased, but it should be up to the developer to analyze their target audience and choose the best mode based on that. By default, though, we should target the biggest userbase, especially considering how big of an issue heavy TS is for this library and what kind of reputation it gave it [blame CR] |
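A sketch of that kind of fallback chain (the feature checks are standard browser APIs, but the mode names and their mapping to SO's actual render modes are assumptions):

```js
// Illustrative default-mode selection with fallbacks: threaded
// offscreen "fast" first, then plain "fast", then WASM blending.
function pickDefaultRenderMode() {
    const hasWorkers = typeof Worker !== 'undefined';
    const hasOffscreen = typeof OffscreenCanvas !== 'undefined';
    if (hasWorkers && hasOffscreen) return 'fast-offscreen'; // illustrative name
    if (hasWorkers) return 'fast';
    return 'wasm-blend'; // WASM blending as the last resort
}
```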
so what's currently stalling this? |
As an original author of some of these things, I would say that WASM blend should perform much better than browser-based blending in the cases I was mainly optimising it for: older browsers and funny engines like WebOS. I was specifically trying to make it fast on ancient Chrome (27 or so), hoping that newer hardware and software would do better anyway 🙃 That said, I would suggest defaulting to WASM blend on old-ish browsers (say, those shipped earlier than 2019 or so) or when hardware acceleration is turned off. Ideally it should be decoupled from rendering ahead, though, so it can render subs in sync with video (instead of hoping that rendering one frame is so fast no one notices a lag). And thanks a million to @dmitrylyzo for taking this up! |
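One way to approximate "hardware acceleration is turned off" is the standard failIfMajorPerformanceCaveat WebGL context attribute; the rest of this sketch (the function name and the decision it feeds) is an assumption:

```js
// Heuristic: if a WebGL context cannot be created once a major
// performance caveat (e.g. software rendering) is disallowed, the
// browser is likely not hardware-accelerated, so prefer WASM blending.
function prefersWasmBlend() {
    const canvas = document.createElement('canvas');
    const gl = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });
    return gl === null;
}
```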
A weird thing I also noticed with fast render is that while total render time [not sum of all frame time, but |
Merging? |
Thanks for the reply!
Thanks to you both: you for originally writing this, and dmitrylyzo for porting and fixing it. :) |
I'm waiting for @rcombs to take a look at the patches, after which this may either need more mending, be merged, or need to be split up further after all. Be patient, as this is quite a lot and additionally hard to review. (Now also marking this in the GitHub UI) |
This PR remains too large to review practically in a single shot. Furthermore, some of the later commits are fixes for bugs introduced in previous commits (e.g. 10c35c5 appears to fix a bug introduced in or after 95bff90), meaning that the repo state is broken between commits, which makes this unsuitable for merge as-is. Never break functionality in one commit and then fix it in another within the same PR; just rebase and fix the broken commit. Some changes from this are not actually relevant to this branch and can be broken up into smaller PRs for separate review and merge; for instance:
Furthermore, some of these commits refactor code that was added in previous commits within the same PR. This makes commit-by-commit review effectively useless. For instance, f38fc7e deletes large amounts of code added in 69906fc. Please squash your commits down appropriately. Meanwhile, many of these commits add both base-level infrastructure and calls into it in the same commit. This makes review more difficult, and also would make future bisects harder. Implement new C++ functionality in one commit, then call it from JS in another, with each split up into as many commits as appropriate (for instance, adding a new class should usually be a commit on its own). |
Most of the commits are not mine - I am just porting a feature from another repo and trying to keep the original authorship of the code. FPS change was added after #111 (comment) |
I think the original commit structure doesn't matter, just the final one does; as long as you put JustAMan as a co-author it should be fine. But from what I talked about with rcombs, a feature/functionality shouldn't be split into more than one commit, or, more simply, the same code can't be changed in more than one commit, and unfortunately this won't be merged until those requirements are met ;-; So if I were you I wouldn't worry that much about the original commit structure, just focus on readability |
I'm fine with any editing of the commits; I can cede the "code ownership/authorship" to anyone wanting to maintain this if that helps things move forward. I did this in the first place "for the greater good", not for the fame or anything. |
There's some value in keeping a close mapping to the original history, since it was already deployed publicly for over a year; that's why this form wasn't rejected immediately. However, the non-atomicity and interleaving of multiple changes is too detrimental to maintainability and bisectability in the long term, while in the short term it makes the PR very hard to review properly (as the long response times show), making it more likely that bugs slip in (we already identified and fixed several; possibly there are even more) and hampering a merge. I think, now that we're no longer trying to emulate the original jellyfin history, this can be split into at least four series (in whatever order is most convenient):
Thanks again for your work on this, JustAMan and dmitrylyzo! Just so it isn't buried under all the comments posted since, here again are the future TODOs previously identified. The first three (as noted above) and the consolidation of
|
After squashing, you lose all messages of intermediate commits and probably info about authorship.
Bisecting: there is probably not much difference between a single (squashed) commit and a merge commit when using the … The … |
Since I will port using the same …, I think you need someone with a more flexible mind (to port all of this). EDIT: |
Adding the animation detection and the option first and then later reusing the detection for
There is a significant difference in all but single-commit merges. A proper patch series consists of well-documented atomic commits, meaning:
This not only makes review easier, it also means the patch series itself can be bisected, and when you get a result, the amount of code that needs to be analysed to find the source of the bug is much smaller, and you have plenty of explanation for what it is meant to do and why it was done the way it is, helping in understanding the code and making it less likely that new bugs are introduced in an attempt to "fix" the bug currently at hand. Sometimes this can mean the commit message is longer than the actual code change, like here. For an example of how the commits are supposed to be split up (but all commits are almost trivial) see the
If relevant, this info can be added to the commit message.
Pointing fingers to assign blame when a bug is found is not productive; fixing the bug is. Often, having small atomic commits also means there's only one author per commit, so you'd also “know who to blame”. Here we don't, as the original implementation does not consist of atomic commits and is known to be buggy. Keeping the code maintainable and as easy to fix and bisect as possible is more important than “know[ing] who to blame”. |
Original author: @JustAMan
This PR adds a Render ahead mode that improves the accuracy of event timing by rendering a set of images in advance so that each frame is ready in time. Jellyfin uses this mode.
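Conceptually, the mode works like the following sketch (all names here are hypothetical; the real implementation lives in the worker and handles actual bitmaps, sizes, and cache eviction):

```js
// Conceptual render-ahead loop: frames for upcoming timestamps are
// rendered into a queue ahead of time, so displaying a frame is just a
// lookup and blit instead of a render.
const renderQueue = []; // entries: { time, bitmap }

function renderFrameAt(time) {
    // Stand-in for the actual libass/WASM render call.
    return { renderedAt: time };
}

function renderAhead(currentTime, horizonMs, stepMs) {
    for (let t = currentTime; t < currentTime + horizonMs; t += stepMs) {
        if (!renderQueue.some((f) => f.time === t)) {
            renderQueue.push({ time: t, bitmap: renderFrameAt(t) });
        }
    }
}

function frameFor(currentTime) {
    // Newest pre-rendered frame at or before the current time.
    let best = null;
    for (const f of renderQueue) {
        if (f.time <= currentTime && (best === null || f.time > best.time)) {
            best = f;
        }
    }
    return best === null ? null : best.bitmap;
}
```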
I cherry-picked related commits, skipping merge-commits.
List of commits
848a686
65fbfec
5430cc9
4e42502
90910bd
21c84cc
6980deb
6d3a5c7
7fc34cf
cea2dc9
88d2df4
bd8bcab
ecd47dd
67947f3
9dd6eb2
e1604cb
Merge pull request #6 from JustAMan/improve-timing-accuracy
a8d670e
d3bc472
90d4848
a66e797
971a997
a166606
Merge pull request #8 from JustAMan/lite-mode
825c999
345d701
4c5f018
bcf4b5f
5256b63
Merge pull request #11 from JustAMan/limit-canvas-size
cc9575c
93f4d0f
0722559
Merge pull request #12 from JustAMan/fix-resize
7eefe74
595d0e2
Merge pull request #15 from JustAMan/update-build
c66e7b1
Merge pull request #19 from Teranode/master
0e9e792
9eac714
Changes
Add Render ahead mode.