Nextcloud has several open issues (#30762, #29841) about its inability to clean up leftover chunks from failed or canceled uploads when using S3 object storage as the primary backend.
The script itself is very simple: it removes all upload chunks that are older than a set time limit.
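In essence, it looks for chunk entries in Nextcloud's file cache whose modification time is older than the cutoff and deletes the matching objects from the bucket. The query below is only an illustrative sketch of that idea (it assumes the default `oc_` table prefix and a 24-hour cutoff; the actual query used by the script may differ):

-- Illustrative sketch only: find chunked-upload entries older than 24 hours.
-- With S3 as primary storage, each fileid maps to an object named urn:oid:<fileid>,
-- which can then be removed from the bucket and from the file cache.
SELECT fc.fileid, fc.path
FROM oc_filecache fc
WHERE fc.path LIKE 'uploads/%'
  AND fc.mtime < UNIX_TIMESTAMP() - 86400;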
As I'm running Nextcloud on Scaleway Kubernetes with Scaleway's S3-compatible Object Storage, this script is optimized for my use case. I will happily accept pull requests that make it more versatile and add support for other S3-compatible object storage providers.
The script was written for my own setup and therefore currently only works with the following:
- MySQL database (or MariaDB)
- Nextcloud version 15 or higher
- Docker, Kubernetes, or local PHP 8+
To keep things as simple as possible, the script is bundled in a Docker image that can be run either standalone or in Kubernetes.
It is designed to be run as a cron job, e.g. every 60 minutes.
Simply copy the .env.example file to a new .env file and adjust the values to your setup. Make sure the location you are running the script from can access both the S3 bucket and the Nextcloud database (firewall rules, IP blocking, etc.).
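The variable names below are only illustrative; the authoritative list is in the .env.example file itself:

# Illustrative values only – check .env.example for the actual variable names.
MYSQL_HOST=db.example.com
MYSQL_PORT=3306
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=secret
S3_ENDPOINT=https://s3.fr-par.scw.cloud
S3_BUCKET=my-nextcloud-bucket
S3_ACCESS_KEY=SCWXXXXXXXXXXXXXXXXX
S3_SECRET_KEY=your-secret-key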
Then run:
$ docker run --rm --env-file=.env otherguy/nextcloud-cleanup:latest
Found 0 left over files.
Recovered 0 B from S3 storage.
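To run the container periodically outside of Kubernetes, a host crontab entry along these lines would work (the path to the .env file is just an example):

# Run the cleanup every 60 minutes, at the top of the hour.
0 * * * * docker run --rm --env-file=/opt/nextcloud-cleanup/.env otherguy/nextcloud-cleanup:latest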
Edit the kubernetes-secret.yml file and add the correct environment variables for your database credentials and your S3 bucket details.
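As a rough sketch, the secret carries the same variables as the .env file (names again illustrative); the secret name matches the one referenced by the CronJob:

apiVersion: v1
kind: Secret
metadata:
  name: nextcloud-cleanup-config
type: Opaque
stringData:
  MYSQL_HOST: db.example.com
  MYSQL_PASSWORD: secret
  S3_BUCKET: my-nextcloud-bucket
  S3_SECRET_KEY: your-secret-key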
Then, apply the secret and the CronJob to the cluster:
$ kubectl apply --validate -f kubernetes-secret.yml -f kubernetes-cronjob.yml
secret/nextcloud-cleanup-config configured
cronjob.batch/nextcloud-cleanup configured
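For reference, the CronJob boils down to something like the following (a simplified sketch, not necessarily the exact manifest shipped in the repository):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nextcloud-cleanup
spec:
  schedule: "0 * * * *"   # every 60 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nextcloud-cleanup
              image: otherguy/nextcloud-cleanup:latest
              envFrom:
                - secretRef:
                    name: nextcloud-cleanup-config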