Hi, I’m currently testing mailcow / docker running from a ZFS dataset. If I understand correctly, I can create a full backup using the supplied helper script backup_and_restore.sh. But doing this for a large mailbox makes a full copy every time a backup runs, so with 100 days of retention you end up with 100 complete copies of your data at the backup location.

I think a more efficient approach would be to take a ZFS snapshot of the docker dataset and send it to a backup ZFS dataset, giving incremental and space-efficient backups. The problem is that docker creates zillions of datasets and sub-datasets, and I’m not really sure which one I should snapshot and send. Any advice? And does this look like a good backup strategy? I admit I’m new to docker, so maybe I’m missing something.
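For reference, this is roughly the snapshot-and-send workflow I have in mind. The dataset names `tank/docker` and `backup/docker` are just placeholders for whatever my real layout turns out to be, and the snapshot names are examples:

```shell
# Placeholder dataset names -- substitute your actual pool/dataset paths.
SRC=tank/docker
DST=backup/docker

# Take a recursive snapshot of the dataset and all of its children.
zfs snapshot -r "${SRC}@backup-2024-06-01"

# First run: send the full replication stream to the backup dataset.
# (-R replicates the whole dataset tree; -u leaves the copy unmounted.)
zfs send -R "${SRC}@backup-2024-06-01" | zfs receive -u "${DST}"

# Later runs: send only the incremental delta between two snapshots.
zfs snapshot -r "${SRC}@backup-2024-06-02"
zfs send -R -i "${SRC}@backup-2024-06-01" "${SRC}@backup-2024-06-02" \
  | zfs receive -u "${DST}"
```

The incremental sends are what would make this cheap compared to repeated full copies, since only blocks changed between the two snapshots cross the wire.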