[Bug]: FATAL: [Server] Unhandled rejection: SequelizeUniqueConstraintError: Validation error #3123
Is it crashing the container or just showing an error? |
It just hangs and reports a 502 error in the browser. I just applied this solution, which consists of a small kernel parameter change: https://stackoverflow.com/questions/55763428/react-native-error-enospc-system-limit-for-number-of-file-watchers-reached I'll monitor it for a week and report back. |
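For reference, the fix in that Stack Overflow answer boils down to raising the kernel's inotify watch limit. A typical change (524288 is the commonly suggested value; adjust to taste) is a one-line config fragment:

```
# /etc/sysctl.conf — raise the kernel's inotify watch limit
fs.inotify.max_user_watches=524288
```

It can be applied without a reboot with `sudo sysctl -p`.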
Well, no more limit errors, but now I can reproduce this on every scan of this library:
And this doesn't hang the container; this directly makes it crash. |
Can you share your docker config, and whether you are using remote file systems for the mounts? |
Sure!
And yes, audiobooks, podcasts and books are stored on a remote NFS. It has full permissions for the UID/GID provided in the docker-compose file, over a 1G network. Do you need any more info? |
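For context, an NFS-backed setup like the one described usually looks something like this in docker-compose (all paths, UID/GID values, and mount names below are placeholders, not the actual config from the attachment):

```yaml
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf
    environment:
      - PUID=1000   # placeholder: the UID granted full access on the NFS export
      - PGID=1000   # placeholder GID
    volumes:
      - /mnt/nfs/audiobooks:/audiobooks   # NFS-mounted on the host
      - /mnt/nfs/podcasts:/podcasts
      - /mnt/nfs/books:/books
      - ./config:/config
      - ./metadata:/metadata
```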
Are there supposed to be 2 slashes in the audio book mount? Not sure if that's a typo. |
Yes, it's a typo; I edited the real mount points before posting them here. Fixed now. |
From what I read above, it looks like you have watcher enabled and are running (automated?) library scans. First of all, since your libraries are NFS mounted, I suspect that the watcher is not working for you (which is likely why you need to run library scans). Watcher only works reliably when watched files and directories are on the local file system - when they're mounted on a remote NFS, watcher will appear to be running but will not get notified of file change events. Therefore, I'd advise disabling watcher (uncheck Settings -> Enable watcher). Once you do that, please let us know if your library scans are still crashing. |
Good morning!! I'll try and report back soon. Something to take into account: all my libraries use NFS shared storage (podcasts and audiobooks) with the watcher enabled and working fine. I also have a Jellyfin instance on the same server, with something similar to the watcher (if not the same) enabled on my media directories, also on NFS shared storage, working fine for more than 4 years. The only difference from my other libraries is that the ebook one has more than 130,000 files (which may be a lot for a SQLite database). Let me test and report back. Thanks a lot for the support :) |
Well, same issue as before: watcher disabled for the ebooks library and scanned manually (logs in debug mode):
It scans about 135 books of the ~130,000 and then crashes on this constraint. |
If your watchers are working fine, why would you need an automated library scan? The Audiobookshelf watcher is supposed to detect any file changes in any of the directories of your library and add books automatically when they're detected, without requiring a library scan. Is it doing that? |
Maybe I didn't explain myself correctly: the watcher is disabled for that library, automated scans are also disabled, and I'm just triggering a manual scan. The scan is what's crashing. It ends with that constraint error and restarts the container with an exit 0. |
OK, understood. Thanks for turning the log level to Debug. I'm looking at the logs. |
As with the other bug, I'd appreciate it if you attached the whole server-side log (including debug output from the entire last scan up to the crash). This kind of failure might hint at a race condition accessing the database, so I want to understand all that happened (and all that didn't happen) prior to the failure. |
No problem! Will grab them in an hour. |
No problem! I have the logs from nearly all of July. Do you want all of them? |
Just the last day (which contains the last scan you ran manually). |
Here you have it: thanks! :D |
One more question: has the library "Libros" been successfully scanned before, or were you never able to fully scan it? |
Never, as it's really big; other libraries like podcasts and Audiolibros are scanned fine. I even deleted and recreated the Libros library to rule out issues from its initial creation, without success. |
OK, I believe I understand what is happening here. I think this is a specific issue with the book. Can you attach this specific epub here so I can examine it? |
Yeah, I was able to repro the crash by duplicating the series metadata lines in a random epub's opf file:
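For illustration, duplicated series metadata in an OPF file looks roughly like this (a made-up fragment using the common calibre meta convention, not the poster's actual file):

```xml
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example Book</dc:title>
  <meta name="calibre:series" content="Example Series"/>
  <meta name="calibre:series_index" content="1"/>
  <!-- duplicated lines like these triggered the unique-constraint crash -->
  <meta name="calibre:series" content="Example Series"/>
  <meta name="calibre:series_index" content="1"/>
</metadata>
```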
Then, scanning this epub, I get the same crash: |
Nice find!! Let me check it later; I will exclude this ebook from the library folder and rescan to confirm. Thanks for the effort. |
Okay, I found the duplicates too and deleted them with the Calibre editor: And voilà, it's now scanning beyond book no. 135, which was the culprit. Now I have another 129,750 books to scan and "correct" if it finds another broken one. Out of ignorance: is it possible, on the app side, to skip repeated metadata tags and only keep the first one found? I'd guess yes, since my e-readers can read this book with the repeated tags, but I don't know whether that would compromise the app code :) Edit: I found another one (which was to be expected): |
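The "keep only the first occurrence" idea could be sketched like this (a hypothetical Node.js illustration, not the actual Audiobookshelf code or the fix that was eventually merged):

```javascript
// Hypothetical sketch: when parsing an OPF <metadata> section, keep only the
// first value seen for each meta tag name (e.g. calibre:series), silently
// dropping later duplicates instead of letting them reach the database.
function dedupeMetaTags(metaEntries) {
  const seen = new Set()
  const result = []
  for (const entry of metaEntries) {
    if (seen.has(entry.name)) continue // skip repeated tag, keep the first one
    seen.add(entry.name)
    result.push(entry)
  }
  return result
}

// Example: a book whose OPF repeats the series lines
const parsed = [
  { name: 'calibre:series', content: 'Some Series' },
  { name: 'calibre:series_index', content: '1' },
  { name: 'calibre:series', content: 'Some Series' }, // duplicate
  { name: 'calibre:series_index', content: '1' } // duplicate
]
console.log(dedupeMetaTags(parsed).length) // 2
```

Dropping later duplicates would match what the e-readers apparently do, though the real fix belongs in the scanner's metadata parsing rather than a helper like this.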
Of course, something like this should never cause a crash. I'm working on a fix. |
Nice!! Then I'm going to stop "fixing" the books, hehe. Let me know when you have something; I'll gladly test :) |
The fix #3152 is in review. |
Nice!! Thanks a lot for the support, quick and effective. Loving the app, eagerly awaiting the next release 👏👏👏 |
I can confirm I had the same issue, and with the 'edge' (latest) docker build it is now solved! Thanks for the fix! |
Fixed in v2.12.0. |
I can confirm that this is working fine now!! I've also found another issue regarding large ebook library management; I'll open a separate issue. |
What happened?
When adding a large number of ebooks to a library (around 30,000), the container crashes during a library scan with: ERROR: [Watcher] Error: ENOSPC: System limit for number of file watchers reached
What did you expect to happen?
Scanning to take a long time, but not to crash
Steps to reproduce the issue
Just adding a large number of ebooks and scanning
Audiobookshelf version
2.10.1
How are you running audiobookshelf?
Docker
What OS is your Audiobookshelf server hosted from?
Linux
If the issue is being seen in the UI, what browsers are you seeing the problem on?
None
Logs
Additional Notes
No response