Apply per request rate limiting to nginx #1029
Merged
Having reviewed the available settings, and in discussion with @jemnery, we have opted for a value of 200 requests per second at this stage. It is important to note that choosing an exact restriction method and associated values is challenging: we want to mitigate opportunities for DDoS while ensuring that genuine users of the service remain unimpeded.
DDoS attacks can come from a single machine or a small number of machines and IPs generating large numbers of requests, as well as from a large number of machines and IPs each generating a small number of requests.
If we had a more reliable VPN service, I would prefer to maintain an up-to-date whitelist of MOJ IP addresses and restrict outside access. However, the developer experience on GlobalProtect is often poor, and I would expect the other VPNs to be similarly problematic at times.
It is important to recognise that many users of this service will access it via a VPN, which means that, from nginx's perspective, a higher number of requests will arrive from a single IP address. Restricting the number of requests per second too aggressively would therefore be problematic.
Nginx utilises the leaky bucket algorithm. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows. In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first‑in‑first‑out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
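The leaky bucket behaviour described above can be sketched as a toy model in Python (this is an illustration of the algorithm, not nginx's actual implementation; the function name and the tick-based draining are simplifying assumptions):

```python
from collections import deque

def leaky_bucket(arrivals, rate, capacity):
    """Toy leaky-bucket model.

    arrivals  -- list of request counts, one entry per tick
    rate      -- requests drained (processed) per tick
    capacity  -- size of the FIFO queue (the bucket)

    Returns (processed, dropped, still_queued).
    """
    queue = deque()
    processed = dropped = 0
    for n in arrivals:
        # New requests join the FIFO queue until the bucket overflows;
        # overflowing requests are discarded and never serviced.
        for _ in range(n):
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1
        # The server drains up to `rate` requests each tick.
        for _ in range(min(rate, len(queue))):
            queue.popleft()
            processed += 1
    return processed, dropped, len(queue)

# A burst of 1500 requests in one tick, with rate=200 and capacity=1000:
# 1000 requests queue, 500 overflow and are dropped, 200 are drained
# this tick, leaving 800 queued.
print(leaky_bucket([1500], rate=200, capacity=1000))  # → (200, 500, 800)
```

In this simplified model draining happens after all arrivals in a tick; in reality requests arrive and drain continuously, but the overflow behaviour is the same.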
It uses a burst multiplier, which by default is set to 5. This effectively creates a bucket of size

rps * burst multiplier

in this case 200 * 5 = 1000. Request 1001 would be dropped.
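Putting the agreed numbers together, a minimal nginx configuration sketch might look like the following (the zone name, shared-memory size, and backend are illustrative assumptions, not the actual config in this PR):

```nginx
http {
    # Track clients by IP in a shared-memory zone; rate=200r/s is the
    # per-client limit agreed above.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=200r/s;

    server {
        location / {
            # burst=1000 ≈ rps * burst multiplier (200 * 5).
            # Requests beyond the burst are rejected.
            limit_req zone=per_ip burst=1000;
            proxy_pass http://app_backend;
        }
    }
}
```

Note that in stock nginx `burst` defaults to 0 and must be set explicitly on `limit_req`; a default multiplier of 5 would presumably come from whatever controller or templating layer generates the config.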