-
A better setup would probably be to leverage a true distributed filesystem :) But given that's a big lift for a small setup, I'm not seeing a better alternative. If you didn't need the filesystems accessible to multiple clients, I'd suggest not using a network filesystem but a SAN of some sort, iSCSI for example. Though you could still perhaps set that up: export the block devices over the network to one system that mounts them, merges them with mergerfs, and exports the pool over NFS. Performance would probably be better. However, the system dynamics change a bit with respect to errors.
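To make that concrete, here is a rough sketch of the iSCSI + mergerfs + NFS variant. Hostnames, IQNs, device names, mount points, and options are all placeholders, and ACL/authentication setup is omitted; treat it as an outline under those assumptions, not a tested recipe.

```
# --- On each storage server: export a local data disk as an iSCSI target ---
# (hypothetical device name and IQN; ACL/auth setup omitted)
targetcli /backstores/block create name=disk1 dev=/dev/sdb
targetcli /iscsi create iqn.2024-01.example.servera:disk1
targetcli /iscsi/iqn.2024-01.example.servera:disk1/tpg1/luns create /backstores/block/disk1

# --- On the one box that owns the pool: attach the remote block devices ---
iscsiadm -m discovery -t sendtargets -p serverA.example
iscsiadm -m node -T iqn.2024-01.example.servera:disk1 -p serverA.example --login

# The remote disk now appears as a local block device (look under /dev/disk/by-path/
# for the IQN); format and mount it as you would a local disk.
mkdir -p /mnt/serverA/disk1
mount /dev/disk/by-path/<portal-and-iqn>-lun-0 /mnt/serverA/disk1

# Merge the local and remote disks into one pool with mergerfs...
mergerfs /mnt/serverB/disk1:/mnt/serverA/disk1 /mnt/pool -o cache.files=off,category.create=mfs

# ...and export that single pool over NFS to every client
# (/etc/exports entry, shown here as a comment):
#   /mnt/pool  *(rw,no_subtree_check,fsid=1)
exportfs -ra
```

Roughly speaking, the error-handling difference is that a dead disk or dropped iSCSI session surfaces as block-device I/O errors on the pooling box, rather than as an NFS mount hanging or going stale on each client.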
-
I have two storage servers, each with its own disks holding different data.
On each server, the disks are exported as NFS shares to the other server. I have also set up mergerfs on each server, so that its local disks and the NFS mounts from the other server appear as a single mergerfs filesystem. Such that:
ServerA local disk, also exported as an NFS share:
  /mnt/serverA/disk1
On ServerB, that share is mounted at:
  /mnt/serverA/disk1
ServerB local disk:
  /mnt/serverB/disk1
ServerB mergerfs pool (branches and mount point):
  /mnt/serverB/disk1:/mnt/serverA/disk1 /mnt/mergerfs/fs1
The same is done on the ServerA side; a rough /etc/fstab sketch of ServerB's half is shown below.
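For concreteness, this is roughly what ServerB's /etc/fstab might look like for the layout above. Device labels, hostnames, and mount options are assumptions, not taken from the actual setup.

```
# /etc/fstab on ServerB (hypothetical device labels, hostnames, and options)

# Local disk
LABEL=disk1                             /mnt/serverB/disk1  ext4            defaults                                     0 2

# ServerA's disk, mounted over NFS at the same path it uses on ServerA
serverA:/mnt/serverA/disk1              /mnt/serverA/disk1  nfs             defaults,_netdev                             0 0

# mergerfs pool combining the local disk and the NFS mount
/mnt/serverB/disk1:/mnt/serverA/disk1   /mnt/mergerfs/fs1   fuse.mergerfs   cache.files=off,category.create=mfs,_netdev  0 0
```

ServerA's fstab would mirror this with the local and NFS roles swapped.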
My reasoning for this configuration is that each server runs its own *arr containers handling its own content, and I'm trying to avoid limiting the available disk space to what each server has locally.
In addition, these drives are also shared to a remote media server, which mounts them as NFS shares and likewise uses mergerfs to combine them into a single mount (a rough sketch follows).
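Sketch of the media server side, under the same assumptions about hostnames and options as above:

```
# /etc/fstab on the media server (hypothetical names and options)
serverA:/mnt/serverA/disk1              /mnt/serverA/disk1  nfs             defaults,_netdev                             0 0
serverB:/mnt/serverB/disk1              /mnt/serverB/disk1  nfs             defaults,_netdev                             0 0

# Single pool over both NFS mounts, used read-mostly by the media server
/mnt/serverA/disk1:/mnt/serverB/disk1   /mnt/media          fuse.mergerfs   cache.files=off,category.create=mfs,_netdev  0 0
```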
Is there a better approach to this kind of setup?