Blog post -

Updated version of the Open Source load testing tool benchmarks conducted earlier this year

Earlier this year, we wrote a couple of articles reviewing the functionality and usability (link to review article) and the performance (link to benchmark article) of a bunch of open-source load testing tools. Since then, we have unveiled our own open-source load testing tool, k6, so it seems time for an update to our reviews and benchmarks, both to include k6 and to test newer versions of the other tools out there.

Read the full story here.

The tools tested: ApacheBench, Artillery, Gatling, The Grinder, Hey (previously "Boom"), JMeter, k6, Locust, Siege, Tsung, Wrk.

The tests

We executed tests similar to the ones in the old benchmark, but we used only 10 ms of simulated network delay and, because of the previously mentioned issues with Siege, we tested only four VU levels: 20, 50, 100 and 250 VUs. We also ran the tests for only 120 seconds, as opposed to 300 seconds in the earlier benchmark. The number of requests made in each test varied from about 100,000 to 3,000,000, depending on the number of VUs and how fast the tool in question was.
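To make the methodology concrete, here is a minimal, self-contained Python sketch of what a single benchmark run amounts to: a fixed number of concurrent virtual users hammering one URL for a fixed duration, after which RPS is computed. This is not one of the benchmarked tools, and the target URL and constants are illustrative placeholders.

```python
# Sketch of a single benchmark run: N concurrent "VUs" request one URL
# for a fixed duration, then requests per second is computed.
import threading
import time
import urllib.request

TARGET_URL = "http://test-target.local/"  # hypothetical target with ~10 ms simulated delay
VUS = 250          # one of the tested VU levels: 20, 50, 100 or 250
DURATION = 120     # seconds, matching the benchmark run length

request_count = 0
count_lock = threading.Lock()
deadline = time.monotonic() + DURATION


def virtual_user():
    """Issue requests back to back until the deadline, counting completed ones."""
    global request_count
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
                response.read()
        except OSError:
            continue  # failed requests are simply not counted in this sketch
        with count_lock:
            request_count += 1


threads = [threading.Thread(target=virtual_user) for _ in range(VUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{request_count} requests in {DURATION}s = {request_count / DURATION:.0f} RPS")
```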

We looked at the RPS (requests per second) rates of the different tools at the different VU levels, and also at how much extra delay each tool added to transactions.

When it comes to pushing through as many requests per second as possible, Wrk rules as usual. ApacheBench is a hair's breadth behind Wrk, though, and Hey is also quite close. All three did over 20,000 RPS when simulating 250 VUs with 10 ms of network delay.

After the top three, we have a group of decent performers that includes JMeter, k6, Gatling, Tsung and The Grinder. These tools perform maybe 25% worse than the top three in terms of raw request generation ability. As the difference is not great, I would say that for most people the top eight tools in this list are all very much usable in terms of request generation performance.

The bottom three tools are another story. They can only generate a fraction of the traffic that the top eight tools can. Locust is especially limited here: where most tools in this test manage to produce 15,000 RPS or more, Locust can do less than 500. Artillery is better, but still very poor in terms of request generation, and will only get you close to 1,500 RPS. Siege stops at roughly 3,000 RPS. These three tools are best used if you only need to generate small amounts of traffic. One point in Locust's favour is that it supports distributed load generation, which lets you configure load generator slaves and thereby increase the aggregate request generation capacity. This means that Locust can be scaled by throwing hardware at the problem. The question is how many slave load generators you can have before the master Locust instance packs up and goes home, as it is likely a bottleneck in the distributed setup.
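As an illustration of the distributed option, here is a minimal Locust scenario sketch, assuming a recent Locust release; the class name, target host and path are placeholders, and the launch commands in the comments paraphrase Locust's documented master/worker mode (older releases call the worker role a "slave", as above).

```python
# Hypothetical locustfile.py: a minimal scenario for distributed load generation.
# Assumes a recent Locust release; class name and path are illustrative only.
from locust import HttpUser, constant, task


class BenchmarkUser(HttpUser):
    wait_time = constant(0)  # no think time: each VU issues requests back to back

    @task
    def front_page(self):
        self.client.get("/")

# Roughly, distributed mode is then started as (paraphrased from Locust's docs):
#   coordinator:     locust -f locustfile.py --master
#   each generator:  locust -f locustfile.py --worker --master-host <master-ip>
# Older Locust versions use --slave instead of --worker.
```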

Read the full story here.


Topics

  • Computers, computer technology, software

Categories

  • opensource
  • opensourceloadtest
  • loadtestingtools
  • review
  • benchmark
  • performance testing
  • load testing
  • load impact
  • k6
  • devops
  • continuous delivery
  • continuous integration
