5

I want to see how far my nginx + node.js setup can go and what changes I can make to squeeze out extra performance. I've stumbled on a great article detailing some OS tuning that can be done to withstand more requests (which I'm not sure I completely understand).

Say I want to see how it handles 60,000 requests per second for a duration of time.

I've tried apachebench and beeswithmachineguns. apachebench seems to be limited locally to about 3,500 requests per second; raising the concurrency only decreases the average req/s somehow. With beeswithmachineguns I was able to see a (claimed) ~5,000 requests per second against a test page, but that's still nowhere close to what I want, and it seems to be a bit on the buggy side.

Is there a reliable way to simulate a huge amount of requests like this?

dsp_099
  • 5,021
  • 13
  • 59
  • 115

1 Answer

5

You could give siege a try as well.

The article you've linked looks good to me.

Generating 60,000 rq/s and answering them on the same machine will be a problem, because you will most definitely run out of resources. It would be best to have some other computers (ideally on the same network) generate the requests and let your server handle only answering them.

Here's an example siege configuration for your desired 60,000 rq/s that will hit your server for one minute.

# ~/.siegerc

logfile         = $(HOME)/siege.log
verbose         = true
csv             = true
logging         = true
protocol        = HTTP/1.1
chunked         = true
cache           = false
accept-encoding = gzip
benchmark       = true
concurrent      = 60000
connection      = close
delay           = 1
internet        = false
show-logfile    = true
time            = 1M
zero-data-ok    = false
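With that configuration saved as ~/.siegerc, you can point siege at your server from a separate machine. A minimal sketch (the URL is a placeholder for your actual test page):

siege http://your-server.example.com/

Because benchmark = true disables siege's internal delay between requests and time = 1M limits the run to one minute, no extra command-line flags are needed; everything is picked up from the config file.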

If you don't have the infrastructure to generate the load, rent it. A great service is Blitz.IO (I'm not affiliated with them). They have an easy, intuitive interface and, most importantly, they can generate nearly any amount of traffic for you.

Fleshgrinder
  • 14,476
  • 4
  • 41
  • 51
  • Looks great, I'll give it a shot. Which part of a server will give out under such load first? Will it run out of TCP sockets, or is it simply a matter of CPU / RAM load, etc.? I'm looking to build a system that can handle that type of load indefinitely. I was thinking of using one machine with nginx solely for the purpose of load balancing. Is this possible? – dsp_099 Nov 03 '13 at 12:12
  • The first thing you run out of is TCP sockets. nginx (properly configured) has a very low CPU and RAM footprint. Node.js will increasingly eat up your CPU (and later RAM) as your code base grows. Other languages might give you more performance here (e.g. compiled PHP, PHP with OPcache, C/C++). Indefinitely won't be possible, otherwise you'd have a system that you could easily sell to Amazon, Facebook and Google. nginx as a load balancer is no problem at all. – Fleshgrinder Nov 03 '13 at 13:10
  • If nginx has to sift through incoming requests to pawn them off to, say, 10 other servers, doesn't it run into the issue with TCP sockets too if it's being used as a load balancer, or am I missing something? Thanks for the responses – dsp_099 Nov 03 '13 at 13:24
  • You'll always hit some barrier. But a dedicated machine running only nginx gives it all of the OS's resources. For instance, you could communicate with your upstream servers via Unix domain (file) sockets instead of TCP, so the balancer wouldn't use up any TCP sockets for its upstream connections (plus it's more secure). – Fleshgrinder Nov 03 '13 at 14:02
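The Unix-socket approach mentioned in the comments can be sketched as an nginx configuration. A minimal example, assuming the Node.js workers listen on Unix domain sockets on the same host as nginx (the socket paths and server name are placeholders):

# /etc/nginx/conf.d/load-balancer.conf (paths are illustrative)
upstream node_backends {
    # Unix domain sockets don't consume TCP ports on the balancer,
    # but they only work when the backends run on the same host.
    server unix:/var/run/node/app1.sock;
    server unix:/var/run/node/app2.sock;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_backends;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}

For backends on other machines you'd list host:port entries in the upstream block instead, at which point the balancer does open TCP connections to the upstreams (keepalive connections to the upstream group can reduce how quickly those are consumed).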