💬 We offer consulting services to set up, secure, and maintain ArchiveBox on your preferred storage provider.
We use this revenue (from corporate clients who can afford to pay) to support open source development and keep ArchiveBox free.
ArchiveBox supports a wide range of local and remote filesystems using rclone and/or Docker storage plugins. The examples below use Docker Compose bind mounts to demonstrate the concepts; adapt them to your OS and environment as needed.
Example `docker-compose.yml` storage setup:

```yaml
services:
  archivebox:
    # ...
    volumes:
      # your index DB, config, logs, etc. should be stored on a local SSD (usually <10GB)
      - ./data:/data
      # but bulk archive/ content can be located on an HDD or remote filesystem
      - /mnt/archivebox-s3/data/archive:/data/archive
```
- README: Archive Layout
- Wiki: Usage (Disk Layout)
- Wiki: Usage (Large Archives)
- Wiki: Security Overview (Output Folder)
- Wiki: Publishing Your Archive
- Wiki: Upgrading or Merging Archives
- Wiki: Troubleshooting Filesystem Issues
**Tip:** These default filesystems are fully supported by ArchiveBox on Linux and macOS (with or without Docker).
**Tip:** This is the recommended filesystem for ArchiveBox on Linux, macOS, and BSD (with or without Docker).

```shell
apt install zfsutils-linux
```

ZFS provides RAID, compression, encryption, deduplication, zero-cost point-in-time backups, remote sync, integrity verification, and more...
- https://openzfs.github.io/openzfs-docs/
- https://openzfs.github.io/openzfs-docs/man/v2.2/8/zpool-create.8.html
- https://openzfs.github.io/openzfs-docs/man/v2.2/8/zfs-create.8.html
- https://docs.docker.com/storage/storagedriver/zfs-driver/
- https://www.ixsystems.com/blog/fast-dedup-is-a-valentines-gift-to-the-openzfs-and-truenas-communities/
```shell
# create a new archivebox pool to hold your dataset
zpool create -f \
    -O mountpoint=/mnt/archivebox \
    -O sync=standard \
    -O compression=lz4 \
    -O recordsize=128K \
    -O dnodesize=auto \
    -O atime=off \
    -O xattr=sa \
    -O acltype=posixacl \
    -O aclinherit=passthrough \
    -O utf8only=on \
    -O normalization=formD \
    -O casesensitivity=sensitive \
    archivebox /dev/disk/by-uuid/disk1... /dev/disk/by-uuid/disk2...

# create the archivebox/data ZFS dataset
zfs create \
    -o mountpoint=/mnt/archivebox/data \
    archivebox/data

# optional: pass these extra flags to `zfs create` to enable encryption
#   -o encryption=on \
#   -o keyformat=passphrase \
#   -o keylocation=prompt \
```
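The point-in-time backups and remote sync mentioned above come from ZFS snapshots and `zfs send`/`zfs recv`. A minimal sketch, assuming the `archivebox/data` dataset from the example above; `backup-host`, its `tank/archivebox` dataset, and the previous snapshot name are placeholders to adjust for your environment:

```shell
# hypothetical daily snapshot + offsite replication routine (sketch)
SNAP="archivebox/data@daily-$(date +%Y-%m-%d)"

if command -v zfs >/dev/null; then
    zfs snapshot "$SNAP"                      # instant point-in-time snapshot
    zfs list -t snapshot -r archivebox/data   # review existing snapshots

    # send only the blocks changed since the previous snapshot to another ZFS machine:
    zfs send -i archivebox/data@daily-2024-01-01 "$SNAP" \
        | ssh backup-host zfs recv -u tank/archivebox
else
    echo "zfs not installed; snapshot name would be: $SNAP"
fi
```

Because snapshots are copy-on-write, taking one is nearly free; only the incremental `zfs send` transfers data.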
**Warning:** These filesystems are likely supported, but are not officially tested.
**Caution:** Not recommended. Cannot store files >4GB or more than ~31k-65k Snapshot entries due to directory entry limits.
ArchiveBox supports many common types of remote filesystems using RClone, FUSE, Docker storage providers, and Docker volume plugins.

The `data/archive/` subfolder contains the bulk archived content, and it can be stored on a slower remote server (SMB/NFS/SFTP/etc.) or object store (S3/B2/R2/etc.). For data integrity and performance reasons, the rest of the `data/` directory (`data/ArchiveBox.conf`, `data/logs`, etc.) must be stored locally while ArchiveBox is running.
**Important:** `data/index.sqlite3` is your main archive DB; it must be on a fast, reliable, local filesystem that supports FSYNC (SSD/NVMe recommended for the best experience).
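After any storage change it's worth confirming the index DB is still intact. A quick sketch, assuming the `sqlite3` CLI is installed and you run it from your collection root (the directory containing `data/`):

```shell
# integrity-check the main archive DB; a healthy DB prints a single "ok"
[ -f ./data/index.sqlite3 ] \
    && sqlite3 ./data/index.sqlite3 'PRAGMA integrity_check;' \
    || echo 'no ./data/index.sqlite3 found; run this from your collection root'
```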
**Tip:** If you use a remote filesystem, you should switch ArchiveBox's search backend from `ripgrep` to `sonic` (or `FTS5`). (`ripgrep` scans over every byte in the archive to do each search, which is slow and potentially costly on remote cloud storage.)
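Switching to `sonic` is a config change plus an extra service. A rough sketch of what that looks like in `docker-compose.yml`; the env var names and image are from memory and may differ across ArchiveBox versions, so treat this as a starting point and verify against the wiki page for your release:

```yaml
# hypothetical sketch; verify key names against your ArchiveBox version
services:
  archivebox:
    environment:
      - SEARCH_BACKEND_ENGINE=sonic
      - SEARCH_BACKEND_HOST_NAME=sonic
      - SEARCH_BACKEND_PASSWORD=SomeSecretPassword
  sonic:
    image: archivebox/sonic
    environment:
      - SEARCH_BACKEND_PASSWORD=SomeSecretPassword
    volumes:
      - ./data/sonic:/var/lib/sonic/store
```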
`docker-compose.yml`:

```yaml
services:
  archivebox:
    volumes:
      - ./data:/data
      - archivebox-archive:/data/archive

volumes:
  archivebox-archive:
    driver_opts:
      type: "nfs"
      o: "addr=some-remote-server.example.com,nolock,soft,rw,nfsvers=4"
      device: ":/archivebox-archive"
```
`docker-compose.yml`:

```yaml
services:
  archivebox:
    volumes:
      - ./data:/data
      - archivebox-archive:/data/archive

volumes:
  archivebox-archive:
    driver: local
    driver_opts:
      type: cifs
      device: "//some-remote-server.example.com/archivebox-archive"
      o: "username=XXX,password=YYY,uid=911,gid=911"
```
```shell
# install the RClone and FUSE packages on your host
apt install rclone fuse    # or: brew install rclone

# IMPORTANT: needed to allow FUSE drives to be shared with Docker
echo 'user_allow_other' >> /etc/fuse.conf
```
Then define your remote storage config in `~/.config/rclone/rclone.conf`:

**Tip:** You can also create `rclone.conf` using the RClone Web GUI: `rclone rcd --rc-web-gui`
```ini
# Example rclone.conf using Amazon S3 for storage:
[archivebox-s3]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
```
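Before wiring the remote into Docker, it's worth a quick sanity check that the credentials actually work. A sketch assuming the `[archivebox-s3]` remote defined above:

```shell
# list the top-level buckets/dirs on the new remote; errors here usually
# mean bad credentials or region, not an ArchiveBox problem
command -v rclone >/dev/null \
    && rclone lsd archivebox-s3: \
    || echo 'rclone missing or remote unreachable; fix this before continuing'
```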
- SMB / Ceph / SFTP / FTP / WebDAV (e.g. Nextcloud)
- Google Drive / Dropbox / OneDrive
- Amazon S3 / Backblaze B2 / Cloudflare R2 / DigitalOcean Spaces
- Google Cloud Storage / Azure Blob / Azure Files
- Storj / Sia / Archive.org Storage
- And many more...
Bonus:
- Set up gzip compression: https://rclone.org/compress/
- Set up file encryption: https://rclone.org/crypt/
- Set up hashing engine: https://rclone.org/hasher/
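For the file-encryption bonus above, the usual pattern is a `crypt` remote that wraps your storage remote. A hypothetical sketch (the wrapped bucket/path is a placeholder, and `rclone config` can generate all of this interactively):

```ini
# hypothetical crypt remote wrapping the [archivebox-s3] remote above
[archivebox-s3-crypt]
type = crypt
remote = archivebox-s3:my-bucket/archivebox
filename_encryption = standard
password = <output of `rclone obscure`>
```

Then mount or sync to `archivebox-s3-crypt:` instead of `archivebox-s3:` and data is encrypted client-side before upload.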
- If needed: transfer any existing local archive data to the remote volume first:

```shell
rclone sync --fast-list --transfers 20 --progress /opt/archivebox/data/archive/ archivebox-s3:/data/archive
# optional: verify the copy matches before moving the local data aside
rclone check /opt/archivebox/data/archive/ archivebox-s3:/data/archive
mv /opt/archivebox/data/archive /opt/archivebox/data/archive.localbackup
```
- Mount the remote storage volume as a FUSE filesystem:

```shell
# --allow-other is essential: it allows Docker to access the FUSE mount
# --uid/--gid 911 match the default user inside the ArchiveBox container
# --vfs-cache-mode=full caches both file metadata and contents locally
# --transfers/--checkers: use 16 threads for transfers & 4 for checking
rclone mount \
    --allow-other \
    --uid 911 --gid 911 \
    --vfs-cache-mode=full \
    --transfers=16 --checkers=4 \
    archivebox-s3:/data/archive /opt/archivebox/data/archive   # remote:path mountpoint
```

See the RClone documentation for more detailed instructions: RClone Documentation: The `rclone mount` command
**Tip:** You can use any RClone FUSE mount as a normal volume (bind mount) for Docker ArchiveBox; typically no storage plugin is needed as long as `allow-other` is set up properly.

```shell
docker run -v $PWD:/data -v /opt/archivebox/data/archive:/data/archive archivebox/archivebox
```
`docker-compose.yml`:

```yaml
services:
  archivebox:
    # ...
    volumes:
      - ./data:/data
      - /opt/archivebox/data/archive:/data/archive
```
This is only needed if you are unable to use Option A for compatibility or performance reasons, or if you prefer defining your remote storage config in `docker-compose.yml` instead of `rclone.conf`.

See the RClone documentation for full instructions: RClone Documentation: Docker Plugin
- First, install the RClone Docker Volume Plugin for your CPU architecture (e.g. `amd64` or `arm64`):

```shell
docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone
ln -sf ~/.config/rclone/rclone.conf /var/lib/docker-plugins/rclone/config/rclone.conf
```
`docker-compose.yml`:

```yaml
services:
  archivebox:
    volumes:
      - ./data:/data
      - archivebox-s3:/data/archive

volumes:
  archivebox-s3:
    driver: rclone
    driver_opts:
      remote: 'archivebox-s3:/data/archive'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0
      uid: 911
      gid: 911
      transfers: 16
      checkers: 4
```
To start the container and verify the filesystem is accessible within it:

```shell
docker compose run archivebox /bin/bash -c 'ls -lah /data/archive/ | tee /data/archive/.write_test.txt'
```
---