Thanks for the reply, the public IPv4 is no longer reachable from the outside, only v6.
When I issue docker compose down on the mailcow stack via an IPv6 SSH session, ping replies to IPv4 start returning. The strange thing is that it takes minutes to happen after container start.
mailcow behind jwilder reverse proxy losing IPv4 access to server
DocSnyd3r Sorry, this is then not a topic of mailcow but of your jwilder reverse proxy, which is not supported…
Or it's something with your hosting. Check if you can connect to 443, 25 and so on from within the host running Docker.
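For example, from the Docker host itself, with netcat (addresses and ports are placeholders, adjust to your setup):

# test the exposed ports locally
nc -vz -w 5 127.0.0.1 25
nc -vz -w 5 127.0.0.1 443
# and against the public IPv4, to see whether the host can reach itself from outside
nc -vz -w 5 198.51.100.10 443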
Also, there is no mailcow network, and I found the IP range on this bridge:
11: br-a12b81188341: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c5:3d:ba:1d brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.1/16 brd 172.22.255.255 scope global br-a12b81188341
       valid_lft forever preferred_lft forever
I do not get why /16 is the default for Docker, it seems like a huge waste. Initially mailcow had another IP space which was already in use by another container.
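Sidenote for others hitting this: the size Docker carves out per network can be changed with default-address-pools in /etc/docker/daemon.json. A minimal sketch (the ranges below are just examples, pick ones that are free in your environment, and merge the key with whatever is already in the file):

cat /etc/docker/daemon.json
{
  "default-address-pools": [
    { "base": "172.20.0.0/16", "size": 24 }
  ]
}

After a daemon restart (systemctl restart docker), newly created networks get a /24 each instead of a /16; existing networks keep their range until they are removed and recreated.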
docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
f989234d177f   bridge               bridge    local
0646c9ac1dd8   host                 host      local
3610dce0a4e3   nextcloud_backend    bridge    local
a12b81188341   nextcloud_default    bridge    local
f4bf5f8b511b   nginx-internal       bridge    local
63023c7048a6   nginx_default        bridge    local
9eac7f4c98a5   none                 null      local
17c0ad5a0c05   redir_default        bridge    local
ae52cbd9668a   ts3_default          bridge    local
f076a5fa04b5   watchtower_default   bridge    local
I do not have it on my CentOS 7, so is there a reason Alma 8 is mentioned on the page but not Alma 9, only Rocky 9?
The only strange thing I encountered was:
docker info | grep selinux
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
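If I read those two warnings correctly, the br_netfilter kernel module is not loaded, so bridged container traffic bypasses iptables entirely. The usual fix, sketched here for a systemd-based distro (module and sysctl names are standard, but verify them for your kernel):

# load the module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# let iptables/ip6tables see bridged traffic
cat <<'EOF' > /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system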
[mailcow-dockerized]$ cat /etc/docker/daemon.json
{
  "selinux-enabled": true
}
So docker info reports no SELinux support, even though the config file is in place and the system had a clean boot.
rpm -qa | grep container-selinux
container-selinux-2.119.2-1.911c772.el7_8.noarch
Yes, it is installed
I have the exact same issue. I've been running mailcow on Debian 11 for years now. Recently I had to reinstall the server (not just upgrade it) with Debian 12. Since then, after a random time (at least once a day), the IPv4 of my host stops answering from my ISP. It works from other ISPs. I see a SYN, but no SYN-ACK sent back. Can't ping, can't SSH, can't access 80/443, etc. Like you, IPv6 is OK. As soon as I stop the mailcow containers with docker compose down, everything is back to normal.
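In case someone wants to reproduce the observation, the missing SYN-ACK shows up in a capture on the host (interface name and client address below are placeholders):

# watch the TCP handshake from a client that seems blocked
tcpdump -ni eth0 'tcp port 443 and host 203.0.113.10'
# a blocked client produces repeated [S] packets with no [S.] answer from the host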
It happened again today. I took the time to restart the containers one by one. The issue was fixed as soon as I restarted the netfilter container.
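For reference, restarting only that service looks like this, assuming a stock mailcow-dockerized checkout where the service is called netfilter-mailcow:

cd /opt/mailcow-dockerized   # adjust to your install path
docker compose restart netfilter-mailcow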
Strange, I am running on Debian 12 (upgraded from 11) and have no issues.
I would guess that your hoster is probably the cause of this. I am running virtualized on my home server.
Or it could be the dual stack (IPv4 and IPv6) giving netfilter a problem.
I am using only IPv4, but have not explicitly disabled IPv6 in mailcow.
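If you want to rule the dual stack out, my understanding from the mailcow docs is that disabling IPv6 means turning off the ipv6nat-mailcow helper via docker-compose.override.yml and setting enable_ipv6: false on the mailcow network in docker-compose.yml. A sketch only, since the procedure has changed between releases, so check the current docs first:

# docker-compose.override.yml in the mailcow-dockerized directory
version: '2.1'
services:
  ipv6nat-mailcow:
    image: bash:latest
    restart: "no"
    entrypoint: ["echo", "ipv6nat disabled in compose override"]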
esackbauer what should I do to troubleshoot the netfilter container and help the community to fix this issue?
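Right now the only thing I can think of is tailing its logs while the problem happens (path and service name assumed from a stock install):

cd /opt/mailcow-dockerized
docker compose logs -f --tail=100 netfilter-mailcow
# ban events are logged here with the offending IP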
I finally found the reason on my side. Both my dedicated server and my ISP have dual stack (IPv4 + IPv6).
IPv6 has priority when a connection can be made over both.
A device on my home network (my wife's phone) was trying to log in to IMAP and, for a reason still unknown, failed to do so. After a few attempts, its IPv6 was banned. As each device at home has a different IPv6, I didn't notice it on my side. But because the phone's IPv6 was banned, it then tried to log in to IMAP using IPv4. And at that moment, after a few attempts, my public IPv4 was banned. I don't understand why banning my IPv4 in the netfilter container also blocks my IPv4 on the host itself, but now I understand the logic. I'll investigate the phone.
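For anyone debugging the same pattern: the netfilter container manipulates the host's own iptables/ip6tables, which is why a ban blocks the IP for the whole host and not just for the mail ports. The active bans can therefore be inspected directly on the host (the chain name MAILCOW matches my install, verify on yours):

# list ban rules for both address families
iptables  -L MAILCOW -n --line-numbers
ip6tables -L MAILCOW -n --line-numbers

Banned addresses show up there as REJECT rules, and they can also be lifted from the mailcow admin UI in the Fail2ban section.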