i'm working on a project that will require WebSockets. Actually, it'll require quite a few of them. i suspect that, by the time i'm done, i may need servers holding open 400,000,000 of the little darlings, with about 20% churn as connections are dropped and reopened. Since The Cloud doesn't really exist for me (that will probably be a ranty follow-up post, but let's just say that i'm working at the level most folks call "The Cloud", so i have to actually make stuff work), i need to figure out how to make this happen.
Somewhat surprisingly, there's very little information online about this. It could be that the folks who have figured it out consider it proprietary information to use for whatever makes them happy, but since i don't have that problem, i'm going to happily post what i've found out here.
By the way, i would be thrilled if someone were to point out my blatant ignorance and provide a set of test numbers and profiles that correct whatever errors i've made. i'm publishing this because i couldn't find anything like it, not because i consider it definitive.
Knowing what you know
i'll note that a fair bit of this was done with the able assistance of Ryan Tilder. One of the first problems was determining what to use to actually test things out. There are a lot of technologies out there that promise a great deal, but as i learned at a previous employer, promises are nothing against real numbers. Node.js and Python's Tornado both promise to hold open a phenomenal number of sockets, and a quick mock program in each reported lots of socket connections. The problem was that when we ran netstat, we didn't see anywhere near the same number of active sockets actually open. We're not 100% sure why, but both were fairly quickly eliminated from testing.
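The sanity check itself is nothing fancy; on Linux it's something like this, counting what the kernel thinks is open rather than what the app reports:

```bash
# Count established TCP connections as the kernel sees them:
netstat -tn | grep -c ESTABLISHED
```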
Eventually we settled on two candidate applications. For the socket client app we picked Go, which not only reported back the same number of active sockets as netstat did, but also kept a fairly large number of sockets open at any one time. The candidate server was a simple Java/Netty demo application, which also just echoed back whatever was tossed at it.
For test instances we picked AWS Smalls, Mediums, and Larges as servers. For clients we spun up a number of Smalls. (We very specifically did not pick Micros, because they are the instance type most subject to external effects. Such as, say, being a micro on one of the boxes while we were running a test.)
We applied the following tweaks to the boxes to try and boost the available number of handles:
## Increase the file handle space
echo "
# Increase the ipv4 port range:
net.ipv4.ip_local_port_range = 1024 65535
# General gigabit tuning:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
# This gives the kernel more memory for TCP, which you need
# with many (100K+) open socket connections:
net.ipv4.tcp_mem = 50576 64768 98152
" >> /etc/sysctl.conf
# Load the new settings:
sysctl -p

# AWS sets the ulimit to "unlimited". This is optional:
# ulimit -n 999999

# Modify /etc/security/limits.conf to raise the per-user
# file handle limits:
echo "
* soft nofile 50000
* hard nofile 50000
" >> /etc/security/limits.conf

# We also appended a few settings to /etc/ssh/sshd_config in the
# same way (they aren't reproduced here).
i then built a very simple Go-based "pounder" that opens sockets against whatever server you point it at, then sends data across each one at a specified interval. For the servers, we either ran the Netty demo server or the craptastic "srv.go" in go_pound, which just echoed back what it got. We didn't care about the server doing much more than reading and writing socket data, but we did want it to do more than "just exist", since the actual program will also be doing more than that.
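A stripped-down sketch of the pounder idea looks something like this (not the actual go_pound source; the flag names and the golang.org/x/net/websocket dependency are just for illustration):

```go
// pounder: hold open N websockets and push a payload across each
// one at a fixed interval.
package main

import (
	"flag"
	"fmt"
	"log"
	"time"

	"golang.org/x/net/websocket"
)

// pound opens one websocket and echoes a payload across it forever.
func pound(id int, target string, interval time.Duration) {
	// The handshake requires an Origin header, but a test server won't care.
	ws, err := websocket.Dial(target, "", "http://localhost/")
	if err != nil {
		log.Printf("conn %d: dial failed: %v", id, err)
		return
	}
	defer ws.Close()

	buf := make([]byte, 64)
	for {
		if _, err := ws.Write([]byte(fmt.Sprintf("ping %d", id))); err != nil {
			log.Printf("conn %d: write failed: %v", id, err)
			return
		}
		if _, err := ws.Read(buf); err != nil {
			log.Printf("conn %d: read failed: %v", id, err)
			return
		}
		time.Sleep(interval)
	}
}

func main() {
	target := flag.String("target", "ws://localhost:8080/", "server to pound")
	count := flag.Int("count", 1000, "number of sockets to hold open")
	interval := flag.Duration("interval", 8*time.Second, "delay between sends")
	flag.Parse()

	for i := 0; i < *count; i++ {
		go pound(i, *target, *interval)
	}
	select {} // park main and hold the sockets open
}
```

And the moral equivalent of srv.go, an echo server that does just slightly more than exist:

```go
package main

import (
	"io"
	"log"
	"net/http"

	"golang.org/x/net/websocket"
)

func main() {
	http.Handle("/", websocket.Handler(func(ws *websocket.Conn) {
		io.Copy(ws, ws) // write back whatever arrives, until the client goes away
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Build both, point the pounder at the server with an aggressive interval, and watch netstat on each side to make sure the reported and actual connection counts agree.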
Knowing what you don't know
You'll also note that we're not going through the ELB at this point; we were trying to reduce the number of potential variables. i will note, though, that for websockets you have to use "TCP" as the ELB protocol, not HTTP: ELB aggressively terminates HTTP socket connections.
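If you do put an ELB in front later, the listener has to be raw TCP on both sides. With the aws CLI, a classic ELB setup looks something like this (the names and zone are placeholders, for illustration rather than our actual setup):

```bash
# A classic ELB with plain TCP listeners, so the websocket upgrade
# and long-lived connections pass through untouched:
aws elb create-load-balancer \
    --load-balancer-name ws-test-lb \
    --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=8080" \
    --availability-zones us-east-1a
```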
We also knew that since sockets are effectively file handles, we wanted to eliminate as many other open files and extra connections as we could. i won't go into detail about what we did, but basically we got each box down to under 100 open file handles, which we felt was a reasonable number of system-level handles.
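If you want to run the same audit, the counting part is straightforward (the lsof number is approximate, since it counts more than plain file handles):

```bash
# Rough count of everything open on the box:
lsof | wc -l

# Or count the fd table of a single process (substitute the
# server's actual pid for 1234):
ls /proc/1234/fd | wc -l
```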
We did encounter some unusual things. One being that smalls are VERY UNHAPPY when you use more than about 24K sockets. (24,500 is kind of pushing things.) At that number, SSH becomes fairly unstable and you may have to reboot the box externally. This may be an issue with how we set net.ipv4.tcp_[rw]mem and how AWS actually provisions memory. i'll admit that the high number i picked there was a bit aggressive and potentially optimistic, but since we were building fairly specialized boxes and pushing for maximum numbers, i'm not really concerned. If you're planning on sticking to smalls, you may want to do a tad more homework and pick a more appropriate number.
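For what it's worth, some back-of-the-envelope math leans toward the memory theory, assuming i have the units right (tcp_rmem/tcp_wmem are in bytes, tcp_mem is in 4KB pages): 24,500 sockets at the default 87,380-byte receive buffer is about 2.1GB of potential buffer space on a small that only has 1.7GB of RAM, and the 98,152-page tcp_mem ceiling above caps kernel TCP memory at roughly 400MB, which those sockets can exhaust with only ~16KB of buffered data apiece.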
Knowing what you didn't know
So… the numbers.
Well, here's what we saw (all numbers rounded down). These were the mean results from multiple runs: the small pounders showed the most variance, while the larges were more stable. We were trying to establish a rough baseline, not conduct an accurate survey.
1. Go topped out at 120K connections when being pounded by requests every 8s. When we increased the interval to 20s, it was able to handle the load. More than 20s between transactions is a more realistic number for our project, but i like being aggressive for testing like this.
2. Larges also seemed to top out at just over 200K connections. Again, this may be related to the tcp_mem setting; we may need to do additional testing on this.
3. Netty was able to handle both the 8s and the 20s transaction rates, with CPU being the more limiting factor.
Not Knowing the Unknowns
Is this a complete win for Netty? Possibly. There are a few other factors i'll have to consider before switching over to just that (logging, operational and developmental familiarity, etc.). Likewise, running enough "larges" to handle the problem may solve the issue to the extent that building out the Java server isn't cost effective.
One thing Ryan points out is that when you start dealing with Very Large numbers of concurrent connections, you start to seriously tax your CPUs. If you have a lot of very active channels, that tax goes up a great deal more. It may make sense to opt for CPU-friendly configurations rather than memory-heavy ones. i'll try to remember to post what sort of configurations work best for us if we stay with AWS as a solution.
Still, these are the numbers that we came up with. Hopefully, they'll be reasonably useful to you.