
Apply per request rate limiting to nginx #1029

Merged
mitchdawson1982 merged 2 commits into main from fmd-808-nginx-rate-limiting on Nov 8, 2024

Conversation

@mitchdawson1982 (Collaborator) commented on Nov 8, 2024

Having reviewed the settings available and in discussion with @jemnery, we have opted for a value of 200 requests per second at this stage. It is important to note that coming up with an exact method of restriction and associated values is challenging: we want to mitigate the opportunity for DDoS attacks while ensuring that genuine users of the service can use it unimpeded.

DDoS attacks can come from a small number of machines and IPs each generating large numbers of requests, as well as from large numbers of machines and IPs each generating smaller numbers of requests.

If we had a more reliable VPN service, I would prefer to implement an updated whitelist of MOJ IP addresses and restrict outside access. However, the developer experience on GlobalProtect is often poor, and I would expect the other VPNs to be similarly problematic at times.

It is important to recognise that many users will access this service via a VPN, which means that a higher number of requests will arrive from a single IP address (or a small set of addresses) from nginx's perspective, so restricting the number of requests per second too aggressively would cause problems for those users.

Nginx utilises the leaky bucket algorithm. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows. In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first‑in‑first‑out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
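To make the mechanics concrete, here is a minimal raw-nginx sketch of the directives involved. The zone name, zone size, rejection status code and upstream are illustrative assumptions, not necessarily what this PR changes:

```nginx
# Shared-memory zone keyed by client IP, allowing 200 requests per second.
# Zone name "per_ip" and the 10m size are assumptions for illustration.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=200r/s;

server {
    listen 80;

    location / {
        # Requests above 200 r/s join the burst queue (the "bucket");
        # once the queue is full, further requests are rejected.
        limit_req zone=per_ip burst=1000 nodelay;
        # Reject with 429 Too Many Requests instead of the default 503.
        limit_req_status 429;
        proxy_pass http://app_backend;  # hypothetical upstream
    }
}
```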

It uses a burst multiplier, which defaults to 5. This effectively creates a bucket of size rps × burst multiplier, in this case 200 × 5 = 1,000 queued requests; request 1,001 would be dropped.
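If the limit is applied through the ingress-nginx controller (whose limit-burst-multiplier annotation defaults to 5, matching the behaviour described above), the equivalent annotations would look roughly like the sketch below. The resource name, namespace, host and backend are placeholders, not the actual resources in this repository:

```yaml
# Illustrative sketch only: names, host and backend are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-ingress
  annotations:
    # 200 requests per second per client IP.
    nginx.ingress.kubernetes.io/limit-rps: "200"
    # Default is 5; shown explicitly: queue size = 200 * 5 = 1000 requests.
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```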

mitchdawson1982 requested a review from a team as a code owner on November 8, 2024, 14:18
@murdo-moj (Contributor) left a comment


Looks good!

mitchdawson1982 merged commit 9d5b111 into main on Nov 8, 2024
22 checks passed
mitchdawson1982 deleted the fmd-808-nginx-rate-limiting branch on November 8, 2024, 14:55