fftpack → fft #381
Conversation
This is interesting. Tests are essentially all passing, except for framewise image on dataset number 5. I believe this case has been a heisenbug in the past, at least on OSX, but it's now affecting all platforms. What's strange is that the deviation is extremely non-uniform: on some platforms it's off by about 7 dB, some 10, some 23. This leads me to believe the underlying cause is numerical instability in a least squares estimation, so that we're highly sensitive to random initialization and floating point ops, and this example just happens to be particularly good at triggering it. I'm not yet 100% certain, but that seems like the most likely explanation here, since all other tests are passing with the specified 0.01 dB tolerance.
Interestingly, dataset06 also fails on my dev machine, but only by 0.02 dB (see issue #376).
Ok, I've dug into this failure a little more. It's definitely a numerical stability issue. When the offending case is divided up into frames, we get numerical equivalence in the framewise metrics everywhere except for one frame (proposed rfft branch):

```
>>> imageframe_scores[key]
[[1.756411590484389,
  1.7597473682072609,
  1.7602869182177234,
  1.7594335093388715,
  1.7595170180901967,
  1.759276007646068,
  1.7591737609125007,
  1.759391260702998,
  1.760218424402917,
  1.7600362671204706],
 [1.6992851561218936,
  -8.959569954271648,
  1.7050670369298637,
  1.704666147437159,
  1.7049902335582687,
  1.7049335634150566,
  1.70516553136305,
  1.7047425405295848,
  1.704666154642886,
  1.7048499625553668]]
```

compared to our target pre-computed scores:

```
>>> expected_image_frames[test_data_name]
[[1.7564115904843982,
  1.7597473682073619,
  1.7602869182174874,
  1.7594335093388682,
  1.7595170180900648,
  1.7592760076460983,
  1.759173760912185,
  1.7593912607029725,
  1.7602184244029835,
  1.7600362671204637],
 [1.699285156121919,
  1.7045012461538642,
  1.7050670369289587,
  1.704666147436396,
  1.7049902335583038,
  1.704933563419315,
  1.7051655313634067,
  1.7047425405294456,
  1.7046661546430155,
  1.7048499625703208]]
```

The diff comes to:
I expect what's happening here is that the silent frame of the reference signal is causing a severe instability in the matrix inversion, but not so severe as to trigger an exception here: I put in some debug statements, and we're definitely not hitting the lstsq code path. Replacing the np.linalg solver with scipy.linalg.solve does surface a warning, though:

```
/home/bmcfee/git/mir_eval/mir_eval/separation.py:793: LinAlgWarning: Ill-conditioned matrix (rcond=1.10403e-22): result may not be accurate.
  C = scipy.linalg.solve(G, D).reshape(flen, nchan * nsrc, nchan, order="F")
```

I'm now testing out a fix that re-casts this LinAlgWarning as an exception so that we can fall back on lstsq. So far it seems promising; updated push coming shortly.
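For concreteness, here is a minimal sketch of that kind of fallback (illustrative only, not mir_eval's actual code; `G` and `D` are borrowed from the snippet above, and `solve_with_fallback` is a made-up name):

```python
import warnings

import numpy as np
import scipy.linalg


def solve_with_fallback(G, D):
    """Solve G @ C = D, falling back to least squares when G is ill-conditioned."""
    try:
        with warnings.catch_warnings():
            # Promote scipy's ill-conditioning warning to an error so we can catch it.
            warnings.simplefilter("error", scipy.linalg.LinAlgWarning)
            return scipy.linalg.solve(G, D)
    except (scipy.linalg.LinAlgWarning, np.linalg.LinAlgError):
        # lstsq tolerates rank-deficient / ill-conditioned systems.
        C, *_ = np.linalg.lstsq(G, D, rcond=None)
        return C
```

The important bit is promoting the warning inside the `catch_warnings` context: without it, the ill-conditioned solve succeeds silently and the bad values propagate into the framewise scores.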
Confirmed that the revised LinAlgWarning catch and lstsq fallback fixes the issue in this case, at least on my machine. With any luck, this will resolve #376 and we won't have to raise tolerances further. Implementing this did require switching out np.linalg for scipy.linalg (due to richer error handling) - I don't think this should be a problem, but I'm noting it here.
Well, bsseval is now about a million times slower, I'm guessing because it's properly detecting ill-conditioning now where it wasn't before. It's kind of a miracle that it worked previously, IMO.
Adding the 🐛 label here because I'm becoming convinced that the present implementation shouldn't be trusted. To summarize where we're at:
The bsseval package works around this by explicitly regularizing the problem, which I think I had advocated for several zillion years ago. I'm not sure if we want to go down that road now, but I also don't see any reasonable way to preserve the current behavior.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main     #381      +/-   ##
==========================================
+ Coverage   95.67%   95.81%   +0.13%
==========================================
  Files          19       19
  Lines        2936     2936
==========================================
+ Hits         2809     2813       +4
+ Misses        127      123       -4
```
Thanks for all this sleuthing. Can we get an additional opinion from a source separation eval expert, maybe @faroit?
As an aside, I do think we could also significantly accelerate everything here by rewriting it with proper vectorization instead of nested for loops. However, I don't have the fortitude for a yak shave of this magnitude.
Yeah, it shouldn't be surprising. TBH I've never done a full deep dive on bsseval, but from the parts I've encountered, many of the quantities involved seem fundamentally plagued with numerical instabilities. They do seem fixable, e.g. by either regularizing (ridge regression) or reconceptualizing the definition to account for rank-deficient projections (least squares), but neither of these leads to a compatibility-preserving implementation. My casual bystander opinion here is that we really shouldn't be striving to preserve compatibility, since the current implementation is obviously a bit broken. OTOH, I'm not sure it's mir_eval's job to go about changing metrics without at least some due diligence (i.e. research and publications) evaluating the changes. I don't follow the source separation literature closely (maybe I should?), but I do seem to recall seeing at least a couple of papers proposing revisions to bsseval, maybe from @rabitt or some of the MERL folks? Maybe there's something there we could build on to improve the current implementation?
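To make the regularization option concrete, here is a rough sketch of a ridge-style (Tikhonov) stabilization of the projection solve (purely illustrative; `ridge_solve`, `G`, `D`, and the regularization weight are placeholders, and this is not museval's actual implementation):

```python
import numpy as np
import scipy.linalg


def ridge_solve(G, D, reg=1e-10):
    """Tikhonov-regularized solve of G @ C = D.

    Adding a small multiple of the identity to the Gram matrix keeps the
    system well-conditioned even when a reference frame is silent, at the
    cost of no longer matching the unregularized metric bit-for-bit.
    """
    return scipy.linalg.solve(G + reg * np.eye(G.shape[0]), D)
```

The trade-off is exactly the compatibility concern above: any nonzero `reg` changes the metric slightly, so scores are only directly comparable to other regularized implementations.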
Circling back on this one today, and it's still pretty ugly. I've had my head down in the code for a little bit now, and am increasingly convinced that this whole thing needs a rewrite. (Relax: I don't want to do that in this PR.) To recap things, here's where we're at:
TLDR: fixing things made it both slower and more prone to failure. Is this progress? I don't know. But things are at least marginally more reproducible now, and we don't have silent failures anymore. So what should we do? I see a few options:
Thanks for the summary. I'm in favor of the ridge solution if it's generally accepted by the community, but I suppose we'd need to be careful about communicating what metric exactly we have implemented here (so that people don't erroneously compare to similar-but-different ridge-free implementations). Secondarily, how did it come to pass that there are parallel codebases?
I totally agree in principle, but I don't think we can hope to avoid erroneous comparisons here. People are going to copypasta results tables from older papers, and there's really nothing we can do to stop them, with the possible exception of leaving the old-and-busted implementations as is and defining new terms for the stabilized implementations. But then again, the museval code is already stabilized and uses the same terms, so 🤷
TBH, I think a good portion of this comes down to us not being good enough at staying on top of maintenance and releases. There's also the issue of adding functionality, e.g. #68 and #272, which we let slide. Forks are inevitable when these things happen. Then there are total rewrites for speed / platform differences, e.g. https://github.com/fakufaku/fast_bss_eval/ and https://github.com/fgnt/ci_sdr.
Hm, yes. Based on #68 (comment), maybe it would make the most sense to remove it, or just call out to museval?
Lol, I'd forgotten that we'd already had this exact conversation before, and apparently agreed that this is neither our circus nor our monkeys. 😆 What I think makes sense here is to implement a deprecation cycle, and then remove the module entirely. A shim to museval could be convenient, but I don't like it in general for two reasons:
@faroit what do you think?
Alright, I'm closing this PR out to open a new one implementing the deprecation cycle. (And meanwhile relaxing the test tolerance to skirt around heisenbugs.)
Fixes #373
This PR implements a number of small improvements to the separation module: switching from scipy.fftpack to scipy.fft, and padding FFT lengths via `next_fast_len` rather than to the next power of two (`2**k`).
This should altogether get us about a 50% speedup in separation metrics and bring down the test build times considerably. I don't think this will affect accuracy at all. However, we should probably also roll in #376 once the dust settles.
This PR bumps the minimum supported scipy to 1.5.4 (from 1.4.0) to avoid a bug in the `next_fast_len` implementation for real-valued inputs. 1.5.4 was released in Nov. 2020, while 1.4.0 dates to Dec. 2019. These are both far enough in the past that I don't consider it a big deal, but I am noting it here in case it warrants discussion.
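To illustrate the padding change, here is a small sketch (my own naming and a generic convolution example, not the PR's actual diff):

```python
from scipy.fft import irfft, next_fast_len, rfft


def fast_linear_convolve(x, y):
    """Linear convolution via real FFTs, padded to a fast transform length.

    next_fast_len picks a highly composite length that is usually smaller
    than the next power of two (2**k), so the transforms are cheaper.
    real=True targets rfft/irfft; its buggy behavior in older scipy is why
    the minimum version is bumped to 1.5.4 above.
    """
    n = len(x) + len(y) - 1
    nfft = next_fast_len(n, real=True)
    return irfft(rfft(x, nfft) * rfft(y, nfft), nfft)[:n]
```

For awkward signal lengths, this typically pads far less than rounding up to a power of two, which is where the speedup comes from.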