mcormie
January 14th, 2003, 20:33
Hello all,

I have a potential application that may stress the limits of OpenBSD's bridging / VPN / PF, and I wanted to get some thoughts and opinions.

Scenario:

1) 4 bridged physical interfaces
2) @gigabit speeds - might run into PCI bus throughput issues.
3) VPN bridge to another network segment over a MAN (@gigabit speeds) - CPU overhead issues at these speeds?
4) Packet filtering between these five logical interfaces - again, there might be CPU / I/O bus limitations.

Is this possible? Has anyone tested or benchmarked bridging and VPN at gigabit speeds?

Thanks for your feedback,
Matt

bsdjunkie
January 14th, 2003, 21:07
I've never had troubles with traffic at a full 100Mbps, but I've never tried gig speeds, so I can't help ya there. But you're probably right about the bus slowing you down with 4 interfaces, for sure.

elmore
January 14th, 2003, 21:37
Judging by these pf release notes, I would doubt that pf has the limitations you're speaking of.

http://www.benzedrine.cx/openbsd-31-announce.txt

Have you tried rulesets that take advantage of all of these new features? Hmmmm.... I'd like to help; posting two things might help us give you a proper answer.


1. A simple network diagram of traffic flow. Including all uplink speeds.
2. A copy of your ruleset.

I am certain with the talent on this board, particularly in this area, we can come up with a great solution to your problem.

mcormie
January 15th, 2003, 02:23
I will clarify the situation. We are currently using a FreeBSD-based firewall (Drawbridge) because it was one of the only ones to support FDDI interfaces at the time of install. We are in the process of upgrading our MAN connection to gigabit Ethernet, and I would like to use the opportunity to upgrade the firewall to one that also does stateful packet inspection, as the OpenBSD Packet Filter can.
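
By stateful inspection I mean roughly this kind of ruleset (just a minimal sketch; the interface name is a placeholder):

    # pf.conf sketch: default deny in, stateful pass out
    ext_if="em0"
    block in all
    pass out on $ext_if all keep state
    # explicitly allow inbound ssh, keeping state
    pass in on $ext_if proto tcp from any to any port 22 keep state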

To add to the mix, we are moving some staff to a temporary building for 2 years while a new building is being completed. The occupants of the temporary building require a flat network. The options are dedicated fiber, a leased line, or VPN bridging between the networks. This is what drives the bridging-over-VPN requirement.

Lastly, I would like to separate the WLAN and server traffic from that of the LAN for security reasons. This drives the 4 interface requirement.
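
For the bridging itself I'm picturing something like this (a sketch; the interface names are placeholders):

    # /etc/bridgename.bridge0 -- members joined at boot
    add em0
    add em1
    add em2
    add em3
    up

or, equivalently, at runtime:

    brconfig bridge0 add em0 add em1 add em2 add em3 up

with pf rules then applied on each member interface.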

http://members.shaw.ca/hungfut/AsciiNetwork.gif

elmore
January 18th, 2003, 03:22
So what you've got is two buildings, both with Internet connections, and a VPN between the two. In addition, you're looking at two layer 3 LANs. I assume you're looking at the OBSD box routing between your LAN segments because you don't have an MSFC.

That's what I get from your drawing; is this correct?

The question I have is: how beefy is the box running your firewall? It's not a 486 or something, is it?

I have a comment for you as well. I'm sure you're well aware that any packet destined to leave subnet A for subnet B, even though it's in the same building, has to cross your OBSD firewall, which is also running your firewall service to the Internet and your VPN for the MAN.

I'm sure this is possible to do on a single box provided the hardware is beefy enough; however, you might be better served by placing a second OBSD box inside your network to act as an MSFC. Just a thought. Get back to me, let me know whether what I'm thinking is correct, and we'll get started.

mcormie
January 18th, 2003, 15:24
First off, I appreciate any discussion on this topic. Please don't mistake this for a 'design my network for me because I'm lazy' question. I understand that it probably seems that way due to its scope, but that is not the case.

I am interested in hearing anyone's experiences with gigabit-speed packet filtering and VPNs. This could help me estimate the hardware required, if it is even possible. So to answer your question, elmore: I currently have a number of machines to test with, but am aware that I may need bigger hardware. In particular I was thinking that I may need a server with 64-bit/66MHz PCI slots, so I don't saturate the PCI bus.

In theory:
Gigabit Ethernet ~= 100MB/s (1000Mbit/s / 8 bits/byte = 125MB/s, minus CSMA/CD and framing overhead)
PCI 2.2 bus = 133MB/s (33MHz * 32 bits / 8 bits/byte)
PCI-X bus = 1064MB/s (133MHz * 64 bits / 8 bits/byte)

In practice:
Gigabit Ethernet ~= 50MB/s (roughly 400Mbit/s observed / 8 bits/byte)

I should be able to run 2 gigabit Ethernet interfaces in a PCI 2.2 compliant box and get full real-world bandwidth without saturating the bus. I may be able to fit 4 interfaces in the same box if average interface utilization stays below 65%, which it should.
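
A quick sanity check of that arithmetic in sh (the 65% figure is my own estimate):

    #!/bin/sh
    # Back-of-envelope PCI bus headroom check, using the numbers above.
    BUS=133   # PCI 2.2 bus capacity, MB/s
    NIC=50    # real-world gigabit throughput per interface, MB/s
    echo "2 NICs flat out: $((2 * NIC)) MB/s vs $BUS MB/s bus"   # 100 -- fits
    echo "4 NICs at 65%:   $((4 * NIC * 65 / 100)) MB/s"         # 130 -- just fits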

</ramble mode> Back on topic here... Elmore, I agree with you that I may want to separate the VPN bridge onto another server, which relates to my concern over CPU usage. Do you have any information about the CPU overhead for packet filtering and encryption?

Kernel_Killer
January 18th, 2003, 17:10
Do you have a server that you can dedicate to the VPN and encryption? One that can handle the CPU overhead easily? I don't know about throwing that load over the network, unless you are talking about 2 servers on the same subnet backbone sharing the responsibility. Maybe like a PDC and a BDC.

I take it this network is partially deterministic-based? I could see where you could get some bottlenecks if it were all contention-based.

mcormie
January 18th, 2003, 18:05
Having separate machines for the bridged VPN and for bridging/packet filtering is definitely an option. This will probably be determined by performance, cost, and administration issues.

I am not familiar with PDC and BDC. Are you referring to load balancing or failover?

On that note, does OpenBSD's bridging/PF support the spanning tree protocol? Has anyone used/tested this for bridge failover?

elmore
January 18th, 2003, 19:10
Hey, sorry, I didn't mean to come off as if I thought you needed design help. I merely wanted to make sure I understood your diagram correctly. If I offended you in any way, I apologize.

On with things. As far as VPN and pf overhead go, your box should be able to handle that with no problems; I run a super beefy VPN on my firewalls. However, none of these are at gig speeds. Information I have seen thus far shows that the gx interface still has some problems and that the ti interface is the best to use. Here's a link supporting that.

http://www.netsys.com/openbsd-tech/2002/09/msg00106.html

I have been running the ti interface on a couple of boxes since 2.6 and have never had any problems. I have two networks here at the house with a bridge in between, both running gig interfaces using the ti driver, and I have never had a problem, although I'm not pushing nearly the amount of traffic you'll be doing and it's only two gig cards, not four. The box they run on has PCI 2.2 compliant slots and a 133MHz FSB. Theoretically this should work; in practice you might have some problems.

In searching around I haven't come across anyone using 4 gig interfaces in a single box. Again, my concern is that virtually every packet on your net is going to have to pass through the OBSD box because you're using it as an MSFC, since presumably all computers in your LAN and WLAN need to get routed to your server LAN. I just don't know that the box will be able to keep up if it's also running a VPN and firewall. That's a lot to ask from any box, I think.

In that respect you might be pushing the limits. I think, though, that two OBSD boxes, one running the VPN and firewalling and one doing internal routing, might suit you better.

As far as CPU use goes, pf and isakmpd are extremely optimized. On my corporate net I have a VPN serving 6 locations, using Blowfish in main mode, with default-deny rulesets for pf, and I have never seen any CPU issues. The boxes I run these on are workstation-class Pentium IIIs. The VPN also serves passive VPN connections for about 100 or so remote clients via dial-up, DSL, or cable. Never a CPU or memory issue.

isakmpd will occasionally dump core, but a simple crontab entry will make sure it's never down for more than a few seconds. I have seen reports on the mailing lists that the gig interfaces cause some overhead; on my own boxes I have not substantiated this.
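
Something like this in root's crontab does the trick (a sketch from memory; check the paths on your own box):

    # restart isakmpd within a minute if it has died
    * * * * * ps -axc | grep -q isakmpd || /sbin/isakmpd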

I'm pretty sure pf doesn't support STP at this time. I could be wrong about that, but I can find no documentation on it anywhere.

I hope this helps mcormie,

I'm really interested to find out how this one goes for you; you might be doing something that few others have tried with OBSD. Let us know how it goes.

Also, if we can assist you in any other fashion on this project, I for one would be more than happy to. Good luck!

bsdjunkie
January 18th, 2003, 22:04
If you would like to take some of the encryption overhead off your CPU, check out the VPN accelerator card from Soekris Engineering.

www.soekris.com

I'm ordering a board, crypto card, and case for my new firewall. Another friend has one, and they are awesome. Fully supported in OBSD, of course.
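
If I remember right, the crypto chip shows up via the hifn(4) driver, so you can confirm the kernel found it with:

    dmesg | grep hifn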

:roll:

mcormie
January 19th, 2003, 00:53
No apology required, elmore. That was exactly the kind of information I was looking for.

The comments on the em (Intel) driver were from last April, but the OpenBSD 3.2 changelog (Nov release) lists the em driver as added. The lists also suggest that getting documentation from Intel was nearly impossible, yet the em driver's author is listed as Intel. Very strange. Did Intel rewrite the em driver?
http://www.openbsd.org/cgi-bin/man.cgi?query=em&sektion=4#end

Netgear seems to have switched from the suggested ti chipset to the nge (National Semiconductor) chipset. It seems harder to get an adapter with the ti chipset, as both 3Com and Netgear have discontinued their ti-based adapters. 3Com has switched to the bge (Broadcom) chipset. Anyone have any experience with either of these?

bsdjunkie - Thanks for the crypto card tip. Seems like a good deal for $92.00, especially if it performs as advertised.

Further research led to this information http://www.deadly.org/article.php3?sid=20010925002054 regarding spanning tree bridging being supported. In that link is a reference to a Belgian ISP using 8 Ethernet interfaces (2 quad cards) in a single computer.
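
If I'm reading that right, bridge failover should be as simple as enabling STP on the bridge members with brconfig (a sketch; interface names are placeholders):

    # enable spanning tree on two members of bridge0
    brconfig bridge0 stp em0 stp em1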

I keep being amazed by the networking code in OpenBSD.

Strog
January 20th, 2003, 16:09
4 gigabit cards will definitely cause contention on a conventional PCI bus. There are some alternatives, but the cost will obviously go up. There are motherboards with multiple buses, 64-bit slots, 66MHz+ slots, or even all of the above.

Tyan has a nice board (Thunder i7500) that has
• Six 184-pin 2.5V DDR DIMM sockets
• Supports up to 12GB of Registered DDR266/200
• Dual channel memory bus
• Supports 72-bit ECC memory

• Five independent PCI(-X) buses
• Two 64-bit 133/100/66MHz (3.3V) PCI-X slots
• Two 64-bit 66/33MHz (3.3V) PCI-X slots
• One 32-bit 33MHz (5V) PCI slot

(option)
• QLogic™ Zircon Baseboard Management Controller (BMC)
• Supports the Intelligent Platform Management Interface (IPMI)
• I²C and IPMB

I know you can't use the second processor and don't need anywhere near 12GB of RAM, but 5 independent buses and dual-channel DDR memory would eliminate most of the bottlenecks you would otherwise run into. Most systems you could build with lower-quality parts would work fine when you start out, but they won't scale well and definitely won't keep you anywhere near wirespeed. I know this $400 motherboard costs a little more and you will have to buy more expensive 64-bit gigabit cards, but the return is totally worth it for such a critical piece of equipment at the core of your infrastructure. It looks like you were already thinking of at least 64-bit/66MHz cards anyway.

I agree 100% with bsdjunkie: get the VPN accelerator card. It will help offload a lot of the work. I would still put as fast a CPU in there as you can get your hands on; you buy yourself some headroom as your pf rules evolve and get more sophisticated.

Just a few thoughts I had.

No, I don't work for Tyan. I did build custom servers, workstations, etc. for a company that dealt with Asus, Tyan, SuperMicro, etc., and I have a bit of experience with them. I would stick with the Intel i7500 or recent ServerWorks chipsets for stability and performance reasons.