
Possible deprecation of docker-ipv6nat #65

Open
robbertkl opened this issue Nov 30, 2020 · 53 comments

Comments

@robbertkl
Owner

With the merge of moby/libnetwork#2572 we're finally 1 step closer to having IPv6 NAT built into Docker!

I'm creating this issue to track the release of this feature, and to figure out if there are any remaining use cases for this tool. If not, we can deprecate this tool in favor of the built-in functionality.

@robbertkl robbertkl pinned this issue Nov 30, 2020
@bephinix
Contributor

@robbertkl I think we should keep it up until built-in IPv6 NAT is rolled out for most distributions.
In addition to this, it is required to check if built-in IPv6 NAT behaves the same way docker-ipv6nat does. 😉

@robbertkl
Owner Author

Exactly, agree 100%! I wanted to use this issue to share findings on behavior of built-in IPv6 NAT. After confirming this tool is no longer needed, I wanted to deprecate it with a README message, but still keep it available until the built-in IPv6 NAT is widespread.

@bboehmke

bboehmke commented Dec 1, 2020

We should also track moby/moby#41622 because this is the requirement to enable the IPv6 NAT in the docker daemon.

Many thanks also for the great work on this project, it has made my work with IPv6 and docker much easier.

@J0WI

J0WI commented Dec 10, 2020

Docker 20.10 with IPv6 NAT is out but it has some serious issues: moby/moby#41774

@johntdavis84

I was actually coming here to open a ticket about this very thing. :)

The latest stable update on Manjaro included Docker 20.10, and I saw the new ipv6nat functionality--and read the long thread of people trying to figure out exactly how it should work, here: moby/moby#41622

It sounds like it's very much still experimental? I'm not sure how to check whether a feature is considered experimental or not?

In the meantime, if we've been using docker-ipv6nat without issue, can we just continue as we were, or will the new built-in tools break it? I'd prefer not to switch until it's had at least a few months for the most critical bugs to be worked out.

(It's also amazing to me--in a good way--that the official Docker release is implementing IPv6 NAT after months/years of philosophical pushback asserting that NAT'ing IPv6 is Wrong®. Maybe it is in most contexts, but it's clearly the best way to go in Docker, given how seamless v4 NAT'ing is with containers.)

Thanks for all your work on this. I could never have used IPv6 before this point on docker without your work. :)

@robbertkl
Owner Author

I have no intentions of pulling the plug until we can all agree Docker offers the same functionality (and stability). Of course, I'll be hesitant to add new features to docker-ipv6nat when it might be deprecated "soonish". We're keeping an eye on the development within Docker, and currently have no reason to think it will break docker-ipv6nat if you keep it disabled. Thanks for the support @johntdavis84 !

@bboehmke

bboehmke commented Jan 5, 2021

Finally with release 20.10.2 the upstream IPv6 NAT seems to work now.

If you want to give it a try, simply add the following lines to /etc/docker/daemon.json:

{
  "experimental": true,
  "ip6tables": true
}

and configure IPv6 the same way as for this container (see https://github.com/robbertkl/docker-ipv6nat#docker-ipv6-configuration).

Note: The ipv6nat container should not be running if ip6tables in docker daemon is enabled

@J0WI

J0WI commented Jan 6, 2021

There's a regression in 20.10.2:
moby/moby#41858
moby/libnetwork#2607

@johntdavis84

johntdavis84 commented Jan 6, 2021 via email

@robbertkl
Owner Author

No collaboration; I believe they wrote it from scratch. That makes the most sense, as they can mirror the internal workings of the IPv4 NAT. Docker-ipv6nat is set up as an external listener, so it doesn't make much sense to draw from this codebase.

I agree that it seems they're very much on top of things. Since the decision was made to make it part of Docker, they're taking it seriously.

@johntdavis84

johntdavis84 commented Jan 6, 2021 via email

@fnkr

fnkr commented Jan 6, 2021

Has anyone tried enabling IPv6 NAT for the default bridge network? In my case dockerd tries to execute an invalid command and crashes. I reported it here: moby/moby#41861

@thedejavunl

Hi all,

With Docker 20.10.6 the ipv6nat function is fully integrated (experimental).
You can add the following flags to your daemon.json:
{ "ipv6": true, "fixed-cidr-v6": "fd00::/80", "experimental": true, "ip6tables": true }

@johntdavis84

Hi all,

With Docker 20.10.6 the ipv6nat function is fully integrated (experimental).
You can add the following flags to your daemon.json:
{ "ipv6": true, "fixed-cidr-v6": "fd00::/80", "experimental": true, "ip6tables": true }

Thanks for the update. How does this compare to the earlier updates that enabled/tweaked IPv6 NAT? Is it considered feature-complete and free of known bugs now?

I found this in the release notes:

Networking
Fix a regression in docker 20.10, causing IPv6 addresses no longer to be bound by default when mapping ports moby/moby#42205
Fix implicit IPv6 port-mappings not included in API response. Before docker 20.10, published ports were accessible through both IPv4 and IPv6 by default, but the API only included information about the IPv4 (0.0.0.0) mapping moby/moby#42205
Fix a regression in docker 20.10, causing the docker-proxy to not be terminated in all cases moby/moby#42205
Fix iptables forwarding rules not being cleaned up upon container removal moby/moby#42205

@bboehmke

The docker versions between 20.10.2 and 20.10.6 had some regressions with the userland proxy.
These issues are now solved, and the daemon should work exactly as before when ip6tables is disabled.

There are currently no known bugs in the IPv6 handling (at least none that I am aware of).

I have already used version 20.10.2 in a semi-productive setup without any issues (with the userland proxy disabled).

@johntdavis84

johntdavis84 commented Apr 13, 2021 via email

@Rycieos

Rycieos commented Apr 14, 2021

I can confirm that Docker 20.10.6's ipv6nat implementation works, and it seems to work exactly like how this container was doing it. The only difference I have seen is that the docker ps command now shows that the ports are mapped for both IPv4 and IPv6. The downside being that "experimental" mode needs to be turned on.

@bephinix
Contributor

Let's keep this issue open until NAT for IPv6 is available in upstream docker without experimental mode. 👍

@chesskuo

chesskuo commented Jun 26, 2021

Now (20.10.7), I am using this experimental feature with docker-compose and it works perfectly!

@fnkr

fnkr commented Jul 20, 2021

@chesskuo How do I make this work with docker-compose stacks (which use custom bridge networks)? My containers only get IPv4 addresses unless I use the default bridge network.

@Rycieos

Rycieos commented Jul 20, 2021

How do I make this work with docker-compose stacks (which use custom bridge networks)? My containers only get IPv4 addresses unless I use the default bridge network.

You need to define an IPv6 subnet for the network:

networks:
  network:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:abcd:ef12:1::/64
        - subnet: 10.1.0.0/16

@johntdavis84

johntdavis84 commented Jul 20, 2021 via email

@chesskuo

@fnkr

this is my network part of docker-compose.yml:

networks:
  traefik:
    name: traefik
    attachable: true
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.100.0.0/24
          gateway: 172.100.0.254
        - subnet: fd00:dead:beef::/112
          gateway: fd00:dead:beef::254

@romansavrulin

romansavrulin commented Jun 15, 2022

Does this feature work with Docker Desktop v20.10.14 for Mac? I'm unable to connect to ipv6 hosts or ping it from the inside of the container, even if I put

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "experimental": true,
  "ip6tables": true
}

in the config

@robbertkl
Owner Author

Does this feature work with Docker Desktop v20.10.14 for Mac?

I don't think it will. Docker for Mac runs in a virtual machine (xhyve), not directly in macOS.

@A1bi

A1bi commented Oct 28, 2022

Something I noticed: If you use a ULA prefix for fixed-cidr-v6 like fd00::/80, everything inside your container will still prefer IPv4 over IPv6 unless you force it to use IPv6. For example if you ping or curl (without the -6 flag) dual stack hosts, it will talk to them via IPv4. Kinda a dealbreaker for me.

I guess the OS is smart and knows that a ULA address isn't supposed to be able to talk to a global address and therefore doesn't even try to in the first place.

So then I tried it with the designated documentation prefix 2001:db8::/32 which technically isn't a ULA prefix but also not globally routed. And it did fix the problem. 🎉 I don't know whether this is a bad idea, but I don't see how this should hurt anything if it's behind a NAT anyway.

@guysoft

guysoft commented Oct 28, 2022

@A1bi That explains for me what is going on with #78

@jsravn

jsravn commented Mar 9, 2023

FYI I explained why it prefers ipv4 here: #78 (comment). It's basically glibc ignoring the standard to support a particular configuration (site local ipv6 that uses public ipv4 for internet access). This unfortunately breaks ipv6 NAT by default.

@Trufax

Trufax commented Apr 18, 2023

Just a question: is this container still required for Docker IPv6 NAT, or is it now enough to enable a ULA via the daemon.json as described in the Docker Docs, so the containers will have internal IPv6 addresses?

@J0WI

J0WI commented Apr 20, 2023

Just a question: is this container still required for Docker IPv6 NAT, or is it now enough to enable a ULA via the daemon.json as described in the Docker Docs, so the containers will have internal IPv6 addresses?

The upstream implementation in dockerd is sufficient.

@netphils

Something I noticed: If you use a ULA prefix for fixed-cidr-v6 like fd00::/80, everything inside your container will still prefer IPv4 over IPv6 unless you force it to use IPv6. For example if you ping or curl (without the -6 flag) dual stack hosts, it will talk to them via IPv4. Kinda a dealbreaker for me.

I guess the OS is smart and knows that a ULA address isn't supposed to be able to talk to a global address and therefore doesn't even try to in the first place.

So then I tried it with the designated documentation prefix 2001:db8::/32 which technically isn't a ULA prefix but also not globally routed. And it did fix the problem. 🎉 I don't know whether this is a bad idea, but I don't see how this should hurt anything if it's behind a NAT anyway.

Actually, you shouldn't use Global Unicast Addresses under NAT66; it's against RFC4193. Something like manipulating gai.conf would be better.
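For reference, a gai.conf tweak would look roughly like the sketch below (the values are illustrative, not a recommendation). One caveat: as soon as a single precedence line is present, glibc discards its entire built-in RFC 6724 precedence table, so all rows must be restated:

```
# /etc/gai.conf -- sketch: raise ULA precedence above IPv4-mapped addresses
precedence ::1/128        50
precedence fc00::/7       45   # ULA above ::ffff:0:0/96 (IPv4), default is 3
precedence ::/0           40
precedence ::ffff:0:0/96  35
precedence 2002::/16      30
```

This keeps containers on a ULA subnet preferring IPv6 for dual-stack destinations without resorting to the 2001:db8::/32 documentation prefix.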

@polarathene

polarathene commented May 30, 2023

For example if you ping or curl (without the -6 flag) dual stack hosts, it will talk to them via IPv4. Kinda a dealbreaker for me.

When is that an actual deal breaker, though? That would only happen on the docker host or from containers, not for remote clients connecting.

is it enough now to enable a ULA via the daemon.json as defined in the Docker Docs and the containers will have internal IPv6 addresses ?

Yes, for the default bridge at least you can apply the config shown earlier.

You can set the docker0 default bridge to support IPv6 with fixed-cidr-v6: "fd00:feed:face:f00d::/64" + ipv6: true. User-defined networks (created with docker network create or by docker-compose) need to opt in to IPv6 support and provide an IPv6 subnet themselves; they don't require the daemon.json settings, which only apply to the docker0 bridge.

You may not need to assign IPv6 addresses to containers at all (at least for ULA): IPv4 on the container network works fine if your host has a single IPv6 address that you want to publish ports on, just as you would with IPv4. This works properly (preserving the remote client IP) if you enable ip6tables: true + experimental: true in daemon.json. You'll want that regardless of whether the containers have an IPv6 network, as long as the host's IPv6 address will receive published container ports (which is the case by default).

If you actually need publicly routable IPv6 addresses for each container, that can be done. Port publishing, if used with IPv4, will still publish to host interfaces by default, including the IPv6 address assigned to the docker host. And regardless of published ports, IPv6 GUA addresses would be reachable unless you have a firewall active and specifically allow traffic through for those (port publishing bypasses firewalls though, so that may not be the case 😅).

IPv6 GUA is also a bit more complicated if you've got a /64 block that is not routed: you assign a portion of it to your docker network, and the container addresses are assigned incrementally, with no DHCP/SLAAC to make them publicly routable. Instead you need to manage the NDP proxy table, and there are a few gotchas depending on the environment which can make persisting the proxy-table entries appear unstable. When set up correctly, the public IPv6 interface on the docker host responds to remote clients and routes them to the container via NDP. Using an IPv6 ULA network with NAT (ip6tables: true) is much simpler.
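The NDP proxy bookkeeping described above can be scripted. A minimal Python sketch (the subnet, interface name, and address count are hypothetical placeholders; Docker's actual assignment order may differ) that generates the `ip -6 neigh add proxy` commands for the first few addresses in a subnet:

```python
import ipaddress

def ndp_proxy_commands(subnet: str, count: int, iface: str) -> list[str]:
    """Generate `ip -6 neigh add proxy` commands for the first `count`
    assignable addresses in a subnet routed to a Docker network."""
    hosts = ipaddress.ip_network(subnet).hosts()  # skips the anycast address
    return [f"ip -6 neigh add proxy {next(hosts)} dev {iface}"
            for _ in range(count)]

# Hypothetical GUA subnet and host interface
for cmd in ndp_proxy_commands("2001:db8:1:1::/80", 3, "eth0"):
    print(cmd)
```

This only prints the commands; applying them (e.g. via subprocess with root privileges, after enabling proxy_ndp on the interface) is left to the operator.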


Just as a follow-up to my comment above, I found the problem to be the default policy set on the FORWARD CHAIN, which was set to DROP therefore rendering all routing useless.

At least with v23 of Docker the findings reported earlier seem incorrect.

The FORWARD default policy has been set to DROP for quite a few years now. This is to prevent another host on the LAN (eg: connected to wifi at a cafe or airport) from being able to access any container port (or other networks of the docker host IIRC, such as a corporate VPN).

This can happen because Docker enables ip_forward=1 (kernel network setting, disabled by default usually), thus to prevent that vulnerability it sets DROP, unless ip_forward=1 was already set IIRC (in which case it wouldn't touch the FORWARD default policy I think).

If UFW is active, that also modifies some default policies, like setting INPUT to DROP.

It's possible that while the commenter was experimenting between the two, these conditional behaviours applied, causing the mismatch depending on how they approached the comparison. Or it's possible there was a difference with ip6tables: true in that version which has since been addressed.

Likewise the DOCKER-USER chain is present for ip6tables: true.


We would prefer if Docker would automatically assign IPv6 subnets to networks, like it does for IPv4.

This can be done, you need to edit default-address-pools in daemon.json to include IPv6 address pools. Then you can use docker network create --ipv6 ... or compose enable_ipv6: true without specifying subnets and it should assign one from the default pool instead.

Personally, until there is an official default pool, it's probably more portable to provide an IPv6 subnet explicitly than to require someone to modify the default pools, since you'd then need to declare the IPv4 ones too.

There's also presently a bug with excessive memory usage if a pool would contain a huge number of IPv6 subnets (e.g. millions or billions, vs the 31 you get for IPv4 by default). That'll be resolved once they support initializing pools lazily, on demand.
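The scale of that problem is easy to see with prefix arithmetic; the figures below are just subnet counts computed with Python's ipaddress module (illustrative pools, not Docker behaviour):

```python
import ipaddress

def pool_subnets(base: str, size: int) -> int:
    """Number of subnets of prefix length `size` a default-address-pool
    entry with the given `base` prefix can hand out."""
    return 2 ** (size - ipaddress.ip_network(base).prefixlen)

print(pool_subnets("172.17.0.0/16", 20))  # 16 IPv4 subnets
print(pool_subnets("fd00::/104", 112))    # 256 IPv6 subnets -- manageable
print(pool_subnets("fd00::/8", 64))       # 2**56 subnets -- the memory blowup
```

An eagerly initialized pool with 2^56 entries explains the excessive memory usage; a narrow base prefix paired with a close subnet size keeps the count small.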

@robbertkl
Owner Author

Hi all! The above comment is a great overview of the current state and issues/workarounds. There is also pretty good documentation available from Docker here.

It seems all is well for built-in IPv6 NAT, and I'm glad the folks at Docker eventually embraced this, even if it goes against the nature of IPv6. The last "hurdle" is to take this out of experimental. Unfortunately, I have no idea when this is planned 🤷🏻.

Will keep this issue open in the meantime. No work is done on docker-ipv6nat, but of course the latest release should still be working.

@LeVraiRoiDHyrule

Hi, I'm quite a noob when it comes to IPv6 support on Docker. I am currently using docker-ipv6nat, and it is working great. Would you recommend switching to Docker's built-in IPv6 support?

@polarathene

Would you recommend switching to Docker's built-in IPv6 support?

Yes, you shouldn't need it anymore.

See the official Docker docs page for IPv6 (linked in the comment above yours). You may also like these IPv6 docs I wrote for docker-mailserver.

@robbertkl
Owner Author

I'm actually in the process of moving away myself, in favor of built-in (experimental) IPv6 support. I've been running a test setup (with docker-mailserver as well 😉) for a while and have not run into any IPv6-related issues. The docs that @polarathene mentions above are great indeed!

@LeVraiRoiDHyrule

LeVraiRoiDHyrule commented Mar 12, 2024

I'm actually in the process of moving away myself, in favor of built-in (experimental) IPv6 support. I've been running a test setup (with docker-mailserver as well 😉) for a while and have not run into any IPv6-related issues. The docs that @polarathene mentions above are great indeed!

Thanks! I followed the mailserver docs and it is working. But the problem I am having is that the client IP is replaced by the docker subnet, which means I can no longer ban clients based on their IP. It is exactly what is described here:

[screenshot omitted]

Mailserver doc says that:

This can be fixed by enabling a Docker network to assign IPv6 addresses to containers, along with some additional configuration. Alternatively you could configure the opposite to prevent IPv6 connections being made.

I suppose what they are talking about is this https://docker-mailserver.github.io/docker-mailserver/v13.3/config/advanced/ipv6/#configuring-an-ipv6-subnet , which I've done because my config is the following:

networks:
  services:
    name: services
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: ${SERVICES_NETWORK_IP4}.0/24
        - subnet: fd00:cafe:face:feed::/64

But I'm still getting incorrect remote IPs from my containers; all IPs look like fd00:beef.

EDIT:

It looks like the subnet is not taken into account:
[screenshot omitted]

@robbertkl
Owner Author

@LeVraiRoiDHyrule:
Docker has a default IPv4 pool from which it assigns new subnets for each network. You can configure Docker to also include an IPv6 pool, so you won't have to assign a subnet in compose network definitions; just enable IPv6.

To give you an example, this is my config for the Docker daemon (e.g. /etc/docker/daemon.json):

{
  "experimental": true,
  "ipv6": true,
  "ip6tables": true,
  "fixed-cidr-v6": "fd00:d0ca::/112",
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 20 },
    { "base": "172.18.0.0/15", "size": 20 },
    { "base": "172.20.0.0/14", "size": 20 },
    { "base": "172.24.0.0/13", "size": 20 },
    { "base": "192.168.0.0/16", "size": 24 },
    { "base": "fd00:d0ca::/104", "size": 112 }
  ]
}

This gives you a default network with IPv6 support (and IPv6 NAT), and plenty of IPv4 and IPv6 subnets for a lot of additional docker networks.

In your compose.yaml / docker-compose.yml you can then simply do:

networks:
  services:
    name: services
    enable_ipv6: true

Let me know if this helps!

@LeVraiRoiDHyrule

LeVraiRoiDHyrule commented Mar 12, 2024

Thanks a lot for your answer. I didn't understand why the docs say this:

If you've configured IPv6 address pools in /etc/docker/daemon.json, you do not need to specify a subnet explicitly. Otherwise if you're unsure what value to provide, here's a quick guide (Tip: Prefer IPv6 ULA, it's the least hassle):

I thought I had the choice between specifying a subnet in my docker network or creating a pool in daemon.json. I would prefer to do it in the compose file instead. Is that not doable if I want my container to get the real client IP?

I have recreated the IPv6 network with fd00:cafe:face:feed::/, and now the containers are getting this instead of the client IP, so it's still the same issue.

My daemon.json is currently

{
  "ip6tables": true,
  "experimental" : true,
  "userland-proxy": true
}

as instructed in the docs.

@robbertkl
Owner Author

You're right, this is just an alternative way. It should also work when manually specifying subnets instead of using the address pool.

I just wanted to share my config (perhaps you can spot some differences with your setup) and wanted to provide some context as I'm using default address pools.

@robbertkl
Owner Author

What happens if you manually create a network (docker network create) with IPv6 enabled and an IPv6 subnet, and then docker inspect it? And then what happens if you use that network from your compose (as an external network)?

@LeVraiRoiDHyrule

LeVraiRoiDHyrule commented Mar 12, 2024

You're right, this is just an alternative way. It should also work when manually specifying subnets instead of using the address pool.

I just wanted to share my config (perhaps you can spot some differences with your setup) and wanted to provide some context as I'm using default address pools.

That's weird; I am having the exact problem described in the docs and followed their configuration, but I still get the issue.

Can you confirm that what they mean by

This can be fixed by enabling a Docker network to assign IPv6 addresses to containers, along with some additional configuration

is what's described in the https://docker-mailserver.github.io/docker-mailserver/v13.3/config/advanced/ipv6/#enable-proper-ipv6-support section?

Why don't you have "userland-proxy": true in your daemon.json? Do you think it has something to do with my problem?

What happens if you manually create a network (docker network create) with IPv6 enabled and an IPv6 subnet and then docker inspect it? And then what happens if you use that network from you compose (as an external network).

I tried creating a new one, and I confirm each container gets an IPv6 address, for example "IPv6Address": "fd00:cafe:face:feed::2/64". Then, if I use this container, what it sees as the client is fd00:cafe:face:feed::1.

@saltydk

saltydk commented Mar 12, 2024

Yes, the userland-proxy replaces the source IP. docker/docs#17312

@LeVraiRoiDHyrule

Yes, the userland-proxy replaces the source IP. docker/docs#17312

Oh, so this may be my problem. Are there downsides to turning it off (besides having to assign IPv6 to each container, which is already the case on this network)?

@robbertkl
Owner Author

robbertkl commented Mar 12, 2024

Actually if you check the very last line of this comment docker/docs#17312 (comment), you can see it should still preserve the original client IP when ip6tables is enabled together with the userland proxy.

In fact, I believe my machine is using the userland proxy as well. I don't have "userland-proxy": true in my config, but it's actually enabled by default according to the docs. EDIT: just checked my machine and there are many /usr/libexec/docker/docker-proxy processes running, so indeed I have it enabled.

In my case it works just fine with the userland proxy, so this may not be your issue.

@saltydk

saltydk commented Mar 12, 2024

Yeah, it is enabled by default; we disable it in our project, but I suspect the original reason was performance.

@LeVraiRoiDHyrule

LeVraiRoiDHyrule commented Mar 12, 2024

You are both right. It had nothing to do with any of that. For some obscure reason, my daemon.json got entirely deleted, so IPv6 was only half-working, which caused the client IP to be masked. userland-proxy indeed preserves the client IP: I have now restored my daemon.json and it all works perfectly. I'm very sorry for bothering you with a false lead.

I now need to find out what could have caused this, but that's an entirely different issue I will search for myself.

@saltydk

saltydk commented Mar 12, 2024

If you install Docker via a package manager, or via things like the Nvidia container toolkit, they have had instances of outright replacing that file in the past.

@robbertkl
Owner Author

and it all works perfectly. I'm very sorry for bothering you with false lead.

No worries, happy to help and good to hear it's working now!

@polarathene

Bit of a big reply, and mostly redundant now since the cause was found, but maybe it has some value for others as reference 😅

Misconfigured IPv6

userland-proxy: true + ip6tables: false (defaults in daemon.json)

I tried creating a new one, and I confirm each container get an ipv6 address. For example "IPv6Address": "fd00:cafe:face:feed::2/64". Then, if I use this container, what it sees from the client is fd00:cafe:face:feed::1

This would have been due to the daemon.json misconfig issue you discovered. ip6tables: true + experimental: true is required for the proper routing rules to be applied by Docker when userland-proxy: true is active.

Since an IPv6 address was assigned to the container, the proxy routed to it through the IPv6 network gateway IP. It's the same behaviour as you'd get with the IPv4 one, and IIRC it's userland-proxy: true trying to be helpful since there are no proper IPv6 routing rules in place (ip6tables).

Once you had that correctly applied, it worked properly. Doesn't seem like I address that in the DMS docs, but it is a gotcha I recall experiencing too when I was researching this. I should probably mention that caveat, so thanks for bringing it to my attention 👍

Correctly applying changes to daemon.json

I now need to find out what could have caused this

Docker v25 change with IPv6 (edit: collapsed as unlikely)

What version of Docker are you running on?

v25 introduced a change (and maybe they backported to v23/v24) which will automatically/implicitly enable IPv6 in daemon config if a container assigns an IPv6 address I think. EDIT: Seems related to setting ipv6: true per network with an IPv6 subnet assigned, daemon.json has an ipv6 setting for the legacy bridge docker0 (typically not relevant to compose.yaml)

Perhaps there was a bug there from that change? 🤷‍♂️ I haven't tried it, and I'm not sure if it explicitly updates the daemon.json config file or if it's just a runtime thing. EDIT: It was reverted for Docker v25.0.3 (Feb 2024) as it mistakenly enforced IPv6 for networks in compose.yaml that set enable_ipv6: false.

Other than that, you have to make sure you run systemctl restart docker to apply the changes made to daemon.json, a reload isn't sufficient IIRC.

[screenshot omitted]
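Since a silently emptied or malformed daemon.json was the culprit in this thread, it can be worth sanity-checking the file before restarting the daemon. A minimal Python sketch (the function name is hypothetical; the "experimental" requirement mirrors the 20.10-era behaviour discussed above):

```python
import json

def check_daemon_config(text: str) -> dict:
    """Parse daemon.json content and flag a missing IPv6-NAT prerequisite."""
    cfg = json.loads(text)  # raises on a corrupted or emptied file
    if cfg.get("ip6tables") and not cfg.get("experimental"):
        raise ValueError("'ip6tables' needs 'experimental': true on 20.10.x")
    return cfg

sample = '{"experimental": true, "ip6tables": true, "userland-proxy": true}'
print(check_daemon_config(sample)["ip6tables"])  # True
```

Running something like this against /etc/docker/daemon.json before systemctl restart docker would have caught the deleted config immediately.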

daemon.json replaced when associated package updates

they have had instances of outright replacing that file in the past.

If that happens, you can use a systemd unit drop-in override to provide your own command to run dockerd which points to your own daemon.json file. That should avoid any surprises there?


userland-proxy: true

Enabled (default) vs Disabled

Is there downsides of turning it off

There are some differences in the routing, the referenced docs issue of mine details these here:

[screenshots omitted]

Which I later tried to present via a table for comparison (not entirely complete; I lost a later revision and didn't have time to repeat the testing):

[screenshot of comparison table omitted]

So it really depends on whether you need containers to talk to each other or to the host. Some of those differences are resolvable, and I shared how to adjust the rules, but I don't think any of them have since been upstreamed. If you only care about the client IP for remote connections, then it's really only the IPv6 one that userland-proxy: false resolves.

May eventually be disabled by default + future replacement for iptables / ip6tables

it's actually enabled by default according to the docs.

Yes userland-proxy: true is default and has been for a long time.

There is a separate issue tracking when that'll be changed, but AFAIK the Docker devs want to minimize the differences I documented above first. I don't think they can keep the IPv6 localhost routing without userland-proxy: true, however, so if anyone relied on that for some reason it'd be a breaking change.

There's also an alternative approach with a new networking mode/driver, which is intended to replace iptables-based routing in the future as a new default AFAIK. Not sure if much progress has been made on that since its announcement.


Performance boost with userland-proxy: false

we disable it in our project but I suspect the original reason was due to performance.

For @LeVraiRoiDHyrule this is another difference: skipping the proxy avoids some overhead, which improves throughput if you need network I/O and want it to better match performance outside of a container. I recall an iperf3 test doubling in throughput with userland-proxy: false.
