
Add limits when decompressing gzipped write requests #13100

Closed · docmerlin opened this issue Apr 2, 2019 · 0 comments · Fixed by #16469

docmerlin (Contributor) commented Apr 2, 2019

Why

It is possible to construct a gzipped request that expands to a size exceeding any reasonable limit, resulting in a denial of service: the server may exhaust memory and crash while decompressing.

The following reads the entire contents of the io.Reader into a []byte, with no bound on how much data the decompressed stream may produce:

data, err := ioutil.ReadAll(in)

How

The amount of data read from the source io.Reader should be limited, which can be achieved by wrapping it in an io.LimitReader.

The default for OSS should remain unlimited. At some future point, influxd will support external configuration (e.g. a .toml file), at which point the limit should be exposed as an option.

The limit should be configurable via the http package so that Cloud 2 services can override the default.

@stuartcarnie stuartcarnie changed the title accepting gzip Add a reasonable limit when decompressing gzipped write requests Jan 8, 2020
@stuartcarnie stuartcarnie changed the title Add a reasonable limit when decompressing gzipped write requests Add limits when decompressing gzipped write requests Jan 8, 2020
@GeorgeMac GeorgeMac self-assigned this Jan 9, 2020
@GeorgeMac GeorgeMac reopened this Jan 10, 2020