Forward traffic on same interface
I am operating a server hosting a set of services, each running in a separate Docker container. In addition, a KVM virtual machine running pfSense acts as the firewall. The firewall has a physical interface connected to the external network and a virtual network card connected to the internal container network, which uses MACVLAN on the Docker side, so each container has its own IP address, but all of them are in the same subnet.
For security reasons, the containers need to be isolated and must not, in principle, be able to communicate with each other (only with the external network). For this, MACVLAN is configured in VEPA mode, which allows traffic to and from the parent device, but not to other addresses on the same parent device.
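As a concrete illustration of the described setup (the interface name, network name, and addressing are assumptions, not taken from the question; the `macvlan_mode` driver option requires a reasonably recent Docker release), the container network might be created along these lines:

```shell
# Sketch of the described topology; "eth1" and the names are hypothetical.
# macvlan_mode=vepa forces traffic between sibling containers out to the
# external bridge/router instead of switching it locally on the parent NIC.
docker network create -d macvlan \
  --subnet=10.0.20.0/24 \
  --gateway=10.0.20.1 \
  -o parent=eth1 \
  -o macvlan_mode=vepa \
  containers

# Each service then gets its own address on the shared subnet:
docker run -d --network containers --ip 10.0.20.4 --name svc-a nginx
```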
Now I would like to allow specific traffic between specific containers, so pfSense must route traffic back out the same interface it arrived on, subject to the configured firewall rules (that is, if incoming traffic on the internal interface matches a PASS rule, it should be forwarded to a host on the same interface / same subnet).
I can't seem to get that scenario working (no traffic between the hosts on the internal interface, traffic from and to the external network works as expected). Any ideas on how to proceed from here?
Is there any configuration item in FreeBSD in general, or pfSense specifically, that prevents such a scenario, such as "filter traffic on own interface", or logic along the lines of "in practice this should not happen, because traffic within the same subnet is forwarded by the switch in front of the router, so do nothing with it"?
Interestingly enough, pfSense does not even reply to the ARP request (which might have the same cause):
[root@server ~]# ip r
default via 10.0.20.1 dev server proto static metric 410
10.0.20.0/24 dev server proto kernel scope link src 10.0.20.2 metric 410
21:52:49.651286 ARP, Request who-has 10.0.20.4 tell 10.0.20.2, length 28
21:52:50.673895 ARP, Request who-has 10.0.20.4 tell 10.0.20.2, length 28
21:52:51.697860 ARP, Request who-has 10.0.20.4 tell 10.0.20.2, length 28
21:52:52.721992 ARP, Request who-has 10.0.20.4 tell 10.0.20.2, length 28
I would expect a response with the MAC address of the 10.0.20.0/24 interface. The trace was taken on the firewall on that interface (a ping from the firewall to 10.0.20.4 works as expected).
When adding the entry manually I can see the ICMP echo request, but no reply:
[root@server ~]# arp -s 10.0.20.4 02:42:0a:00:14:04
10.0.20.4 ether 02:42:0a:00:14:04 CM server
22:00:21.403515 IP 10.0.20.2 > 10.0.20.4: ICMP echo request, id 5622, seq 1, length 64
22:00:22.450162 IP 10.0.20.2 > 10.0.20.4: ICMP echo request, id 5622, seq 2, length 64
22:00:23.473790 IP 10.0.20.2 > 10.0.20.4: ICMP echo request, id 5622, seq 3, length 64
22:00:24.497803 IP 10.0.20.2 > 10.0.20.4: ICMP echo request, id 5622, seq 4, length 64
routing docker kvm-virtualization freebsd pfsense
asked Apr 5 at 18:07
Lars
1 Answer
It depends in part on how you intend for one container to reference another. It would be worth considering whether this network topology is suitable for your use case and security policy.
If you wish for container A to address container B directly by hostname or IP address, this needs to follow Layer 2 switching and Layer 3 routing:
- If they are in the same subnet then they will only be able to address each other directly through the switch (which will be blocked by VEPA as you suggested). The pfSense host will only see the traffic if:
- the traffic destination IP is outside of the container subnet, in which case the container will send the traffic to the default gateway to be routed; or
- the traffic is directly addressed to the pfSense hostname/IP address.
- If you placed each container on a different subnet (or even separate virtual networks) then traffic between them would be routed via the gateway (pfSense). This would place pfSense in control of firewall policy.
- If you can determine a policy of which groups of containers are permitted to access each other directly, it would make more sense to group these into virtual/docker networks; then the problem goes away. In this case, you're aligning the network topology with the security policy. This is generally easier to get right, maintain, and for others to reason about.
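A minimal sketch of that grouping idea (the network, container, and image names are invented for illustration): containers that are allowed to talk to each other share a Docker network, and containers with no network in common are isolated by default:

```shell
# Hypothetical grouping: web and api may talk to each other;
# db is reachable only from api, and web can never reach db.
docker network create group-frontend
docker network create group-backend

docker run -d --name web --network group-frontend nginx
docker run -d --name api --network group-frontend my-api-image
docker network connect group-backend api
docker run -d --name db  --network group-backend postgres
```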
You could look at Docker Macvlan's 802.1q trunk bridge mode, which might make it easier to attach multiple container networks to the same libvirt/pfSense interface.
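With the macvlan driver, passing a dotted sub-interface name such as `eth1.30` as the parent makes Docker create the 802.1q VLAN sub-interface itself, so several container networks can ride one trunk into the pfSense VM. A sketch of that mode (interface names, VLAN IDs, and subnets are assumptions):

```shell
# One VLAN-tagged macvlan network per container group, trunked over eth1.
# Docker creates eth1.30 / eth1.40 automatically if they do not yet exist.
docker network create -d macvlan \
  --subnet=10.0.30.0/24 --gateway=10.0.30.1 \
  -o parent=eth1.30 macvlan30

docker network create -d macvlan \
  --subnet=10.0.40.0/24 --gateway=10.0.40.1 \
  -o parent=eth1.40 macvlan40
```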
Alternatively, it may happen that there are only a very narrow and consistent set of ways in which containers access each other. In this case, you could consider port forwarding. You would:
- identify a specific container service which you wish to be accessible to other containers
- on pfSense, forward incoming traffic on a particular port of the MACVLAN interface to the IP/port of the desired container
- when other containers need to access the service, they use the IP address (and nominated port) of the pfSense host (not the target container)
In this case, the other containers don't know anything about where the target service is - it may as well be hosted directly on the pfSense host. Note however that this doesn't scale well, and would only work for simple TCP/UDP traffic (FTP could be painful to set up).
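In pfSense this would be configured through the GUI (Firewall > NAT > Port Forward), but the underlying pf rule it generates would look roughly like the following; the interface name and port numbers are made up for illustration, not a literal pfSense config:

```shell
# Illustrative classic-pf rdr rule: traffic from the container subnet that
# hits the firewall's internal address on port 8080 is redirected to the
# target container's web service. Other containers only ever see 10.0.20.1.
rdr on vtnet1 inet proto tcp from 10.0.20.0/24 to 10.0.20.1 port 8080 \
    -> 10.0.20.4 port 80
```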
There may well also be some other Docker or libvirt networking features which would allow you to define more detailed firewall policy between containers on a virtual network, though I haven't looked deeply into this myself.
Aligning the network topology with the security policy sounds reasonable; the problem I see is that this bypasses the firewall, so there is no fine-grained control over the traffic. 802.1Q, on the other hand, means there will be a VLAN for each container, and thus an interface for each container, which requires bridging within the KVM and a duplication of the rules for each container; this increases the maintenance effort and the required processing power (and thus power draw). The same goes for port forwarding.
– Lars
2 days ago
Do you have any idea whether FreeBSD / pfSense allows activating hairpinning / reflective-relay functionality on a port (or on a bridge, if that is not possible)? To me this seems to be the reason ("... if the destination is on the same segment as the origin segment, the bridge will drop the packet because the receiver has already had a chance to see the frame.").
– Lars
2 days ago
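(For reference, not a FreeBSD answer: on a Linux bridge this "reflective relay" behaviour is exactly the per-port hairpin flag in iproute2; whether FreeBSD's if_bridge offers an equivalent knob is the open question above. The port name below is an assumption.)

```shell
# Linux reference only: enable hairpin on an assumed bridge port "vnet0",
# letting the bridge send frames back out the port they arrived on.
bridge link set dev vnet0 hairpin on

# Inspect the flag:
bridge -d link show dev vnet0
```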
Fair enough, but perhaps that's then your security policy - isolate everything. Absolutely true about the maintenance effort. However, I don't think the processing/power concern is mitigated in your original case (especially for port forwarding), since any topology where pfSense is able to block/allow the traffic will require it to forward packets and apply a ruleset.
– Samuel Jaeschke
yesterday
Unfortunately I don't have a deep understanding of the packet-filter nuances specific to FreeBSD. I've done "router on a stick", but that instead involves different subnets and aliased interfaces. What you're describing almost sounds like SDN, which would be overkill. I do wonder whether there's an existing solution to this problem within the Docker ecosystem though.
– Samuel Jaeschke
yesterday
It depends in part on how you intend for one container to reference another. It would be worth considering whether this network topology is suitable for your use case and security policy.
If you wish for container A to address container B directly by hostname or IP address, this needs to follow Layer 2 switching and Layer 3 routing:
- If they are in the same subnet then they will only be able to address each other directly through the switch (which will be blocked by VEPA as you suggested). The pfSense host will only see the traffic if:
- the traffic destination IP is outside of the container subnet, in which case the container will send the traffic to the default gateway to be routed; or
- the traffic is directly addressed to the pfSense hostname/IP address.
- If you placed each container on a different subnet (or even separate virtual networks) then traffic between them would be routed via the gateway (pfSense). This would place pfSense in control of firewall policy.
- If you can determine a policy of which groups of containers are permitted to access each other directly, it would make more sense to group these into virtual/docker networks; then the problem goes away. In this case, you're aligning the network topology with the security policy. This is generally easier to get right, maintain, and for others to reason with.
You could look at Docker Macvlan's 802.1q trunk bridge mode which might make it easier to attach multiple container networks to the same libvirt/pfSense interface.
Alternatively, it may happen that there are only a very narrow and consistent set of ways in which containers access each other. In this case, you could consider port forwarding. You would:
- identify a specific container service which you wish to be accessible to other containers
- on pfSense, forward incoming traffic on a particular port of the MACVLAN interface to the IP/port of the desired container
- when other containers need to access the service, they use the IP address (and nominated port) of the pfSense host (not the target container)
In this case, the other containers don't know anything about where the target service is - it may as well be hosted directly on the pfSense host. Note however that this doesn't scale well, and would only work for simple TCP/UDP traffic (FTP could be painful to set up).
There may well also be some other Docker or libvirt networking features which would allow you to define more detailed firewall policy between containers on a virtual network, though I haven't looked deeply into this myself.
Aligning the network topology with the security policy sounds reasonable, the problem I see is that this bypasses the firewall, so there is no fine-grained control over the traffic. 802.1Q on the other hand means that there will be a VLAN for each container, and thus a interface for each container, which requires bridging within the KVM and a duplication of the rules for each container, which increases the maintainence effort and the required processing power (and thus power draw) - the same goes with port forwarding.
– Lars
2 days ago
Do you have any idea whether FreeBSD / pfSense allows for activating hairpinning / reflective relay functionality on a port (resp on a bridge, if this is not possible). To me this seems to be the reason ("... if the destination is on the same segment as the origin segment, the bridge will drop the packet because the receiver has already had a chance to see the frame.").
– Lars
2 days ago
Fair enough, but perhaps that's then your security policy - isolate everything. Absolutely true about the maintenance effort. However I don't think the processing/power concern is mitigated in your original case (esp. for port fowarding), since any topology where pfSense is able to block/allow the traffic will require it to forward packets and apply a ruleset.
– Samuel Jaeschke
yesterday
Unfortunately I don't have a deep understanding of iptables nuances specific to FreeBSD. I've done "router on a stick", but that instead nvolves different subnets and aliased interfaces. What you're describing almost sounds like SDN, which would be overkill. I do wonder whether there's an existing solution to this problem within the Docker ecosystem though.
– Samuel Jaeschke
yesterday
add a comment |
It depends in part on how you intend for one container to reference another. It would be worth considering whether this network topology is suitable for your use case and security policy.
If you wish for container A to address container B directly by hostname or IP address, this needs to follow Layer 2 switching and Layer 3 routing:
- If they are in the same subnet then they will only be able to address each other directly through the switch (which will be blocked by VEPA as you suggested). The pfSense host will only see the traffic if:
- the traffic destination IP is outside of the container subnet, in which case the container will send the traffic to the default gateway to be routed; or
- the traffic is directly addressed to the pfSense hostname/IP address.
- If you placed each container on a different subnet (or even separate virtual networks) then traffic between them would be routed via the gateway (pfSense). This would place pfSense in control of firewall policy.
- If you can determine a policy of which groups of containers are permitted to access each other directly, it would make more sense to group these into virtual/docker networks; then the problem goes away. In this case, you're aligning the network topology with the security policy. This is generally easier to get right, maintain, and for others to reason with.
You could look at Docker Macvlan's 802.1q trunk bridge mode which might make it easier to attach multiple container networks to the same libvirt/pfSense interface.
Alternatively, it may happen that there are only a very narrow and consistent set of ways in which containers access each other. In this case, you could consider port forwarding. You would:
- identify a specific container service which you wish to be accessible to other containers
- on pfSense, forward incoming traffic on a particular port of the MACVLAN interface to the IP/port of the desired container
- when other containers need to access the service, they use the IP address (and nominated port) of the pfSense host (not the target container)
In this case, the other containers don't know anything about where the target service is - it may as well be hosted directly on the pfSense host. Note however that this doesn't scale well, and would only work for simple TCP/UDP traffic (FTP could be painful to set up).
There may well also be some other Docker or libvirt networking features which would allow you to define more detailed firewall policy between containers on a virtual network, though I haven't looked deeply into this myself.
Aligning the network topology with the security policy sounds reasonable, the problem I see is that this bypasses the firewall, so there is no fine-grained control over the traffic. 802.1Q on the other hand means that there will be a VLAN for each container, and thus a interface for each container, which requires bridging within the KVM and a duplication of the rules for each container, which increases the maintainence effort and the required processing power (and thus power draw) - the same goes with port forwarding.
– Lars
2 days ago
Do you have any idea whether FreeBSD / pfSense allows for activating hairpinning / reflective relay functionality on a port (resp on a bridge, if this is not possible). To me this seems to be the reason ("... if the destination is on the same segment as the origin segment, the bridge will drop the packet because the receiver has already had a chance to see the frame.").
– Lars
2 days ago
Fair enough, but perhaps that's then your security policy - isolate everything. Absolutely true about the maintenance effort. However I don't think the processing/power concern is mitigated in your original case (esp. for port fowarding), since any topology where pfSense is able to block/allow the traffic will require it to forward packets and apply a ruleset.
– Samuel Jaeschke
yesterday
Unfortunately I don't have a deep understanding of iptables nuances specific to FreeBSD. I've done "router on a stick", but that instead nvolves different subnets and aliased interfaces. What you're describing almost sounds like SDN, which would be overkill. I do wonder whether there's an existing solution to this problem within the Docker ecosystem though.
– Samuel Jaeschke
yesterday
add a comment |
It depends in part on how you intend for one container to reference another. It would be worth considering whether this network topology is suitable for your use case and security policy.
If you wish for container A to address container B directly by hostname or IP address, this needs to follow Layer 2 switching and Layer 3 routing:
- If they are in the same subnet then they will only be able to address each other directly through the switch (which will be blocked by VEPA as you suggested). The pfSense host will only see the traffic if:
- the traffic destination IP is outside of the container subnet, in which case the container will send the traffic to the default gateway to be routed; or
- the traffic is directly addressed to the pfSense hostname/IP address.
- If you placed each container on a different subnet (or even separate virtual networks) then traffic between them would be routed via the gateway (pfSense). This would place pfSense in control of firewall policy.
- If you can determine a policy of which groups of containers are permitted to access each other directly, it would make more sense to group these into virtual/docker networks; then the problem goes away. In this case, you're aligning the network topology with the security policy. This is generally easier to get right, maintain, and for others to reason with.
You could look at Docker Macvlan's 802.1q trunk bridge mode which might make it easier to attach multiple container networks to the same libvirt/pfSense interface.
Alternatively, it may happen that there are only a very narrow and consistent set of ways in which containers access each other. In this case, you could consider port forwarding. You would:
- identify a specific container service which you wish to be accessible to other containers
- on pfSense, forward incoming traffic on a particular port of the MACVLAN interface to the IP/port of the desired container
- when other containers need to access the service, they use the IP address (and nominated port) of the pfSense host (not the target container)
In this case, the other containers don't know anything about where the target service is - it may as well be hosted directly on the pfSense host. Note however that this doesn't scale well, and would only work for simple TCP/UDP traffic (FTP could be painful to set up).
There may well also be some other Docker or libvirt networking features which would allow you to define more detailed firewall policy between containers on a virtual network, though I haven't looked deeply into this myself.
It depends in part on how you intend for one container to reference another. It would be worth considering whether this network topology is suitable for your use case and security policy.
If you wish for container A to address container B directly by hostname or IP address, this needs to follow Layer 2 switching and Layer 3 routing:
- If they are in the same subnet then they will only be able to address each other directly through the switch (which will be blocked by VEPA as you suggested). The pfSense host will only see the traffic if:
- the traffic destination IP is outside of the container subnet, in which case the container will send the traffic to the default gateway to be routed; or
- the traffic is directly addressed to the pfSense hostname/IP address.
- If you placed each container on a different subnet (or even separate virtual networks) then traffic between them would be routed via the gateway (pfSense). This would place pfSense in control of firewall policy.
- If you can determine a policy of which groups of containers are permitted to access each other directly, it would make more sense to group these into virtual/docker networks; then the problem goes away. In this case, you're aligning the network topology with the security policy. This is generally easier to get right, maintain, and for others to reason with.
You could look at Docker Macvlan's 802.1q trunk bridge mode which might make it easier to attach multiple container networks to the same libvirt/pfSense interface.
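As a sketch of that trunk approach (the interface name, VLAN IDs, subnets, and container names below are assumptions, not taken from your setup): each container group gets its own macvlan network bound to an 802.1q sub-interface of the same physical NIC, so inter-group traffic has to cross the pfSense gateway.

```shell
# Assumed names: host NIC eth0, VLAN IDs 10/20, pfSense as the .1 gateway
# on each VLAN. Docker creates the eth0.10/eth0.20 sub-interfaces itself
# when the parent is given in dotted notation (802.1q trunk bridge mode).
docker network create -d macvlan \
  --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
  -o parent=eth0.10 group_a

docker network create -d macvlan \
  --subnet=192.168.20.0/24 --gateway=192.168.20.1 \
  -o parent=eth0.20 group_b

# Containers in different groups can now only reach each other via their
# gateway, i.e. through pfSense's routing and firewall policy.
docker run -d --network group_a --ip 192.168.10.5 --name svc_a nginx
docker run -d --network group_b --ip 192.168.20.5 --name svc_b nginx
```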
Alternatively, it may happen that there are only a very narrow and consistent set of ways in which containers access each other. In this case, you could consider port forwarding. You would:
- identify a specific container service which you wish to be accessible to other containers
- on pfSense, forward incoming traffic on a particular port of the MACVLAN interface to the IP/port of the desired container
- when other containers need to access the service, they use the IP address (and nominated port) of the pfSense host (not the target container)
In this case, the other containers don't know anything about where the target service is - it may as well be hosted directly on the pfSense host. Note however that this doesn't scale well, and would only work for simple TCP/UDP traffic (FTP could be painful to set up).
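For illustration, the forward rule on the pfSense side would amount to something like the following pf-style redirect (pfSense is normally configured through its NAT port-forward GUI rather than a hand-written ruleset; the interface name and addresses here are assumptions):

```
# Forward TCP 8080 arriving on the container-facing interface (vtnet1,
# assumed) to the target container's service at 192.168.10.5:80.
rdr on vtnet1 proto tcp from any to (vtnet1) port 8080 -> 192.168.10.5 port 80
```

Other containers would then address the service as `<pfSense-IP>:8080` and never learn the target container's real address.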
There may well also be some other Docker or libvirt networking features which would allow you to define more detailed firewall policy between containers on a virtual network, though I haven't looked deeply into this myself.
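One such feature on the Docker side is the `DOCKER-USER` iptables chain, which Docker consults before its own rules and which is the supported place for administrator-defined policy. Note this only applies to bridge networks, since macvlan traffic bypasses the host's netfilter path; the subnets below are assumptions:

```shell
# Assumed subnets for two Docker bridge networks; DOCKER-USER is evaluated
# before Docker's own ACCEPT rules, so this blocks cross-network traffic.
iptables -I DOCKER-USER -s 172.20.0.0/16 -d 172.21.0.0/16 -j DROP
iptables -I DOCKER-USER -s 172.21.0.0/16 -d 172.20.0.0/16 -j DROP
```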
answered Apr 6 at 8:54
Samuel Jaeschke
Aligning the network topology with the security policy sounds reasonable; the problem I see is that this bypasses the firewall, so there is no fine-grained control over the traffic. 802.1Q, on the other hand, means that there will be a VLAN for each container, and thus an interface for each container, which requires bridging within the KVM and a duplication of the rules for each container; this increases the maintenance effort and the required processing power (and thus power draw). The same goes for port forwarding.
– Lars
2 days ago
Do you have any idea whether FreeBSD / pfSense allows activating hairpinning / reflective relay functionality on a port (or, if that is not possible, on a bridge)? To me this seems to be the root cause ("... if the destination is on the same segment as the origin segment, the bridge will drop the packet because the receiver has already had a chance to see the frame.").
– Lars
2 days ago
Fair enough, but perhaps that's then your security policy - isolate everything. Absolutely true about the maintenance effort. However, I don't think the processing/power concern is mitigated in your original case (esp. for port forwarding), since any topology where pfSense is able to block/allow the traffic will require it to forward packets and apply a ruleset.
– Samuel Jaeschke
yesterday
Unfortunately I don't have a deep understanding of iptables nuances specific to FreeBSD. I've done "router on a stick", but that instead involves different subnets and aliased interfaces. What you're describing almost sounds like SDN, which would be overkill. I do wonder whether there's an existing solution to this problem within the Docker ecosystem though.
– Samuel Jaeschke
yesterday