
Testing Against DDoS Attacks Part 3 - Running the Test

Note: This is part 3 in a series of posts showing how to set up Locust in Google Cloud to do a load/stress test on your website. If you haven’t read the previous posts, you should start there. These instructions should be used ONLY for testing websites that YOU own, or have express permission to test. Please do not use these instructions for any other purposes.

In the previous two posts in this series, we covered setting up your environment and adapting the existing configurations for your site. This post covers how to start up, run, and shut down the Locust cluster used in the test. Before starting, verify that gcloud is set to the right project and zone:

gcloud config set compute/zone ZONE

gcloud config set project PROJECT-ID

Deploying the Cluster

The next step is to start up a Kubernetes cluster. Google Compute Engine provides several machine sizes to choose from. The original how-to on GitHub uses the default n1-standard-1 machine type and a 3-node cluster. We can do better than that! The command below starts a cluster of 8 nodes of type n1-standard-2. If you want to try bigger machines, the available types are listed here. Substitute a unique name for CLUSTER-NAME below.

gcloud container clusters create CLUSTER-NAME --zone ZONE --num-nodes 8 --machine-type n1-standard-2

Setting up the cluster takes a few minutes. When the Kubernetes cluster is up and running, gcloud prints a summary of the new cluster, including the node count; you should see 8 nodes instead of the default three.
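If you want to double-check from the command line before moving on, listing the clusters shows the node count (a quick sanity check; the exact output columns vary by gcloud version):

gcloud container clusters list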

Now set the kubectl context to use this cluster:

kubectl config use-context gke_PROJECT-ID_ZONE_CLUSTER-NAME

Deploying Locust Master and Worker

With kubectl pointed at the cluster, we can now deploy the locust-master and locust-worker configurations. To deploy the master:

cd kubernetes-config

kubectl create -f locust-master-controller.yaml

You can check the status with:

kubectl get pods -l name=locust,role=master

When the pod shows as ready (1/1 in the READY column) you can proceed to the next step. It should take less than a minute. If you see an error such as ErrImagePull in the STATUS column, something went wrong. Check the YAML files. I’ve also seen problems with bad Docker images; if that happens, rebuild the Docker image and try again.
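If a pod is stuck in an error state, describing it usually shows why. This is just a diagnostic step and uses the same label selector as above:

kubectl describe pods -l name=locust,role=master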

Deploy the locust-master-service:

kubectl create -f locust-master-service.yaml

This configures the backend networking and routing to allow access to the UI and communication within the cluster. The following command shows the forwarding rule created above. Make a note of the IP address in the output, as it is the address of the web UI for your Locust instance.

gcloud compute forwarding-rules list
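If you only want the IP address itself, gcloud's --format flag can extract it directly. This assumes the Locust rule is the only forwarding rule in the project:

gcloud compute forwarding-rules list --format="value(IPAddress)"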

Deploy the worker controller:

kubectl create -f locust-worker-controller.yaml

To verify, you can use the get pods command again. By default, 20 pods are deployed.

kubectl get pods -l name=locust,role=worker

Depending on the machine type you chose above, you can scale the number of workers. There is a limit to how many pods the cluster can schedule before the nodes run out of capacity. With the n1-standard-2 machines used in this example, somewhere around 75-100 workers works well.
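If you're curious where that limit comes from, kubectl can show how many pods each node is allowed to schedule. This is just a quick check, not a required step:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.pods}{"\n"}{end}'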

Scale the workers with kubectl:

kubectl scale --replicas=75 replicationcontrollers locust-worker

Scaling up can take a few minutes, and it may take longer still for all the workers to report to the master. The final step before running a test is to configure a firewall rule. This allows access to the UI from the internet on the IP noted above.

gcloud compute firewall-rules create FWRULENAME --allow=tcp:8089 --target-tags NODENAME

The NODENAME can be obtained by running ‘kubectl get nodes’. The NODENAME is everything from ‘gke’ to ‘-pool’. Based on the example below, the correct NODENAME for the command above would be ‘gke-examplecluster-default-pool’.

jjt@epictetus ~ kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
gke-examplecluster-default-pool-5833c0bb-636w   Ready     <none>    10m       v1.7.8-gke.0
gke-examplecluster-default-pool-5833c0bb-7f3b   Ready     <none>    7m        v1.7.8-gke.0
gke-examplecluster-default-pool-5833c0bb-cx17   Ready     <none>    10m       v1.7.8-gke.0
gke-examplecluster-default-pool-5833c0bb-lxrr   Ready     <none>    10m       v1.7.8-gke.0
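If you'd rather not copy the prefix by hand, something along these lines should extract it; it simply takes the first node name and trims everything after ‘-pool’:

kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name | head -1 | sed 's/\(-pool\).*/\1/'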

Access the Locust UI by going to http://[IP from forwarding rule]:8089

Enter the number of users to simulate and a hatch rate (the number of new users added per second) and start swarming. Depending on the test, the number of workers, and so on, you should be able to generate several hundred requests per second.
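You can also start a swarm without the browser by posting to Locust's web API. The form field names depend on your Locust version (older releases use locust_count and hatch_rate, newer ones user_count and spawn_rate), so treat this as a sketch:

curl -X POST "http://[IP from forwarding rule]:8089/swarm" -d "locust_count=500" -d "hatch_rate=50"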

Cleaning up

We don’t want this cluster sitting around when we aren’t using it. First, the instructions above leave the Locust UI wide open to the internet. While you can create firewall rules to limit access to the UI, I’ve found the behavior inconsistent: sometimes it works, and sometimes it doesn’t. Second, you don’t want the cluster up and eating into your credits when not in use. Based on the tests I’ve run, the average cost of a 5-minute test is less than a cup of coffee. Cleaning up is pretty easy. First delete the cluster, then the firewall rule:

gcloud container clusters delete CLUSTER-NAME

gcloud compute firewall-rules delete FWRULENAME

gcloud compute forwarding-rules delete FORWARDING-RULE-NAME

To get the forwarding rule name, run ‘gcloud compute forwarding-rules list’.

Scripting the whole process

In order to run this as a demo for Riverview, I’ve written a few scripts to quickly start and shut down a cluster on demand. I’m not a scripting expert and there are a lot of ugly hacks at play here, so feel free to improve them as you see fit.

Here is the main script, bbndemo.sh. Save it to the cloned git repository directory. Define the PROJECT, ZONE, and CLUSTER variables and then run it. It provides a pretty guided experience.

#!/bin/bash

#gcloud deployment script for locust demo webinar.

#Define some variables

PROJECT=baffinbaynetworks

CLUSTER=bbnloadtest

ZONE=europe-west2-a

echo "Project set to $PROJECT"

echo "Cluster set to $CLUSTER"

echo "Zone set to $ZONE"

echo Setting region

gcloud config set compute/zone $ZONE

gcloud config set project $PROJECT

gcloud container clusters create $CLUSTER --zone $ZONE --num-nodes 8 --machine-type n1-standard-2

echo "Cluster Created"

#gcloud container clusters list

kubectl config use-context "gke_${PROJECT}_${ZONE}_${CLUSTER}"

cd kubernetes-config/

kubectl create -f locust-master-controller.yaml

sleep 30

while [ "$yn" != "y" ]; do

kubectl get pods -l name=locust,role=master

echo "Are the Master pods up? (y or n)"

read -n1 yn

done


echo "Deploying the locust Service"

kubectl create -f locust-master-service.yaml

while [ "$yn" != "y" ]; do

gcloud compute forwarding-rules list

echo "Are the rules showing up? (y or n)"

read -n1 yn

done


echo "How Many Worker replicas should we create? [Hit ENTER for 50]:"

read reps

#reps is the number of replicas to create

reps="${reps:=50}"

kubectl scale --replicas=$reps replicationcontrollers locust-worker

echo scale command entered.

sleep 5

echo "This usually takes a while. We'll check the pods in a bit"


gcloud compute firewall-rules create locustui --allow=tcp:8089 --target-tags gke-$CLUSTER-default-pool


sleep 20

while [ "$yn" != "y" ]; do

echo "Are all the worker pods up? (y or n)"

read -n1 yn

done

kubectl get pods -l name=locust,role=worker

echo "next step fails unless you get the fw rules up...."

while [ "$yn" != "y" ]; do

gcloud compute forwarding-rules list

echo "Did that show a rule? (y or n)"

read -n1 yn

done

sleep 10

#this var gets the locust ip to open the UI

LOCIP="$(gcloud compute forwarding-rules list | awk 'FNR == 2 {print $3}')"

echo "Locust Management URL:"

echo "http://$LOCIP:8089"

#Pretty sure it will only open on MacOS

open "http://$LOCIP:8089"

sleep 60


read -p "Press any key to clean up and delete the cluster... " -n1 -s


gcloud -q container clusters delete $CLUSTER

gcloud -q compute firewall-rules delete locustui

#Getting the forwarding rule name so we don't have to look it up ourselves

FWRULE="$(gcloud compute forwarding-rules list | awk 'FNR == 2 {print $1}')"

#The forwarding rule lives in a region, not a zone, so strip the zone suffix (e.g. europe-west2-a -> europe-west2)

gcloud -q compute forwarding-rules delete "$FWRULE" --region "${ZONE%-*}"


echo "Everything shutdown now"

exit

If you want to test from different locations, you can change the ZONE variable. I have saved copies as demo-US.sh, demo-ASIA.sh, etc., so I can just start a cluster in whatever region is needed.
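One way to avoid keeping separate copies would be to pass the zone as an argument instead of hard-coding it. A minimal sketch, assuming you replace the ZONE= line in bbndemo.sh with the following:

ZONE="${1:-europe-west2-a}"

Then running ./bbndemo.sh us-central1-b would start the cluster in the US without editing the script.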

Possible Improvements

While the script above works for my current needs, there are a few issues with it. First, if the script is killed before the cleanup runs, the forwarding rule doesn’t get deleted. This results in multiple forwarding rules and breaks the UI IP address lookup.
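If you do end up with leftover rules, a small loop can clear them out. Be aware that this deletes every forwarding rule in the project, so only use it in a project dedicated to this testing, and adjust the region if you changed the ZONE (this sketch assumes europe-west2):

for rule in $(gcloud compute forwarding-rules list --format="value(name)"); do gcloud -q compute forwarding-rules delete "$rule" --region europe-west2; done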

Second, I’d like to figure out how to run a multi-region Locust cluster, with the ability to launch workers globally, in order to demonstrate Riverview’s dynamic threat chart feature. Similarly, I’d like to add the capability to launch exploit attempts against the website to demonstrate the Threat Inspection module.

I also plan to use a similar methodology to deploy another load testing framework, such as Apache JMeter.

Conclusion

I hope that you found this series of blog posts useful and can start to do some testing on your own site. Security is not a passive pursuit; you need to be proactive in tracking down the weak points in your defenses, and sometimes you need to test your existing defenses to make sure they work as expected. If these instructions help you do any of this, I’ve achieved my goal. And of course, if you want to see it in action with Riverview, get in touch with us at www.baffinbaynetworks.com.
