InfluxDB 2.0 sometimes uses inmem index instead of TSI #20257
Labels: area/compat-v1x (v1.x compatibility related work in v2.x), area/2.x (OSS 2.0 related issues and PRs), kind/bug
Steps to reproduce:
We ran into this issue while trying to do something else; I have not attempted to create a minimal set of reproduction steps.
Expected behavior:
Data to be ingested as normal
Actual behavior:
We are inserting with many different threads, and occasionally some threads' writes to InfluxDB get stuck with the following error:
{"code":"internal error","message":"unexpected error writing points to database: partial write: max-series-per-database limit exceeded: (1000000) dropped=5"}
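For anyone hitting the same thing, a minimal sketch of how a client could detect this specific failure from the HTTP error body (the JSON shape is taken from the error above; the function name is ours, not part of any InfluxDB client library):

```python
import json
import re

def parse_partial_write_error(body: str):
    """If `body` is the max-series-per-database partial-write error,
    return (limit, dropped); otherwise return None.
    Format matches the error response observed above."""
    try:
        msg = json.loads(body).get("message", "")
    except (json.JSONDecodeError, AttributeError):
        return None
    m = re.search(
        r"max-series-per-database limit exceeded: \((\d+)\) dropped=(\d+)", msg
    )
    if m:
        return int(m.group(1)), int(m.group(2))
    return None

body = (
    '{"code":"internal error","message":"unexpected error writing points '
    'to database: partial write: max-series-per-database limit exceeded: '
    '(1000000) dropped=5"}'
)
print(parse_partial_write_error(body))  # → (1000000, 5)
```

This only classifies the response; retrying does not help here, since the dropped series are rejected until the series count drops or the limit is raised.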
Environment info:
System info: Linux 5.4.0-1029-aws x86_64 (output of uname -srm)
InfluxDB version: InfluxDB 2.0.2 (git: 84496e507a) build_date: 2020-11-19T03:59:35Z (output of influxd version)
This max-series-per-database setting is not documented for InfluxDB v2, and in v1.x it only applies to the inmem index type, which we don't use (we use tsi1 on InfluxDB v1.8; as far as I know there is no option to change the index type on InfluxDB v2.0).
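For reference, these are the relevant [data]-section settings in an InfluxDB 1.x influxdb.conf (values here are illustrative, not our production config):

```toml
[data]
  # tsi1 is the disk-based index; inmem is the 1.x default
  index-version = "tsi1"
  # series-cardinality cap; 0 disables it, the default is 1000000,
  # and it is only enforced by the inmem index
  max-series-per-database = 0
```

The surprising part of this bug is that a 2.0 server, where neither setting is exposed, still appears to enforce the inmem default of 1000000.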
It seems we get a message like this whenever InfluxDB opens a new shard or is restarted:
(Here are those counts over time: the tsi1_count does seem to be increasing, while the inmem_count decreased after a restart.)
Some notes from our engineer investigating this:
Config:
Logs:
see above
Performance:
n/a