You are correct, this is a weird setup… it took me 10 days to build, but I get to use my old hardware and don't have to pay $20 monthly for the rest of my life… I also learned some skills, and if I want to grow beyond 1TB there is no limit, so I could save much more than $20 monthly in future scenarios… and I stay in control of my data, etc…
Anyway, I figured it out: I just override the volumes in docker-compose.yml with
```yaml
volumes:
  vmail-vol-1:
    driver_opts:
      type: none
      device: /symlink/vmail
      o: bind
  vmail-index-vol-1:
    driver_opts:
      type: none
      device: /symlink/vmail-index
      o: bind
  # […] same pattern for all other volumes
```
Then I just switch the symbolic link to point at local_disk or external_disk depending on the iSCSI state…
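For context, the swap itself is basically a script like the one below; the mount points and the compose commands are placeholders for illustration, not my exact paths:

```bash
#!/usr/bin/env bash
# Repoint the symlinks the docker-compose bind volumes use,
# depending on whether the iSCSI (external_disk) target is mounted.
# Mount points and symlink names below are illustrative.
if mountpoint -q /mnt/external_disk; then
  TARGET=/mnt/external_disk
else
  TARGET=/mnt/local_disk
fi

# -sfn replaces an existing symlink in place
ln -sfn "$TARGET/vmail"       /symlink/vmail
ln -sfn "$TARGET/vmail-index" /symlink/vmail-index
# …repeat for the other volume symlinks

# The bind mounts resolve the symlink when the containers start,
# so the stack needs a restart for the switch to take effect:
# docker compose down && docker compose up -d
```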
So on local_disk I just have a minimal Mailcow setup that acts as a cache, and when the external_disk (iSCSI, in the cloud) is attached again, it holds the full state and gets an `rsync -a` from the local_disk cache I used while external_disk was offline.
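The catch-up step is essentially just the following (paths are placeholders again; only vmail is shown because that is all I have synced so far):

```bash
# Catch up the main storage with whatever arrived while it was offline.
# Trailing slashes make rsync copy the directory contents, not the directory itself.
rsync -a /mnt/local_disk/vmail/ /mnt/external_disk/vmail/
# …plus whichever other volumes turn out to be needed (my open question below)
```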
What I need to figure out now is which of those micro-service volumes I must rsync. Since I'm only testing sending and receiving email from the accounts, rsyncing the vmail volume seems to be enough… Are any other micro-service volumes needed/critical to capture everything the Mailcow server exchanged while external_disk was offline?
PS: I have tested rsyncing just vmail from the 50GB local_disk cache to my 1TB external_disk main storage, and it turned out to work properly… But I am wondering whether I will run into problems like indexing issues or something similar in the long run…