
I am using Java and writing to InfluxDB using batch points. My code is shown below:

    BatchPoints batchPoints = BatchPoints
            .database(dbName)
            .retentionPolicy("autogen")
            .consistency(InfluxDB.ConsistencyLevel.ALL)
            .build();

    point = Point.measurement("cpu")...

    batchPoints.point(point);
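
For reference, here is roughly what the full write path looks like with the influxdb-java client; the connection URL, tag value, and field are simplified placeholders, not my exact code:

    import org.influxdb.InfluxDB;
    import org.influxdb.InfluxDBFactory;
    import org.influxdb.dto.BatchPoints;
    import org.influxdb.dto.Point;
    import java.util.concurrent.TimeUnit;

    InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "user", "password");

    BatchPoints batchPoints = BatchPoints
            .database(dbName)
            .retentionPolicy("autogen")
            .consistency(InfluxDB.ConsistencyLevel.ALL)
            .build();

    // One point per sample; "jkey" is the tag that later hits the limit
    Point point = Point.measurement("cpu")
            .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
            .tag("jkey", "job-42")          // placeholder tag value
            .addField("usage", 42.0)        // placeholder field
            .build();
    batchPoints.point(point);

    // After accumulating a batch, flush it in a single call
    influxDB.write(batchPoints);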

I am writing 20 to 30 million points, and after a while I get this exception:

    java.lang.RuntimeException: {"error":"partial write: max-values-per-tag limit exceeded (100708/100000): measurement=\"cpu\" tag=\"jkey\" value=\ .....

Wondering how to increase the limit? Or do I need to change my schema design?


1 Answer


I found the solution, so I am posting it here. Open the influxdb.conf file, usually located at /etc/influxdb/influxdb.conf, and search for:

# max-values-per-tag = 100000

Uncomment it and change the value to zero, as shown below:

max-values-per-tag = 0

Then restart the InfluxDB instance for the change to take effect.
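
For reference, that line sits in the [data] section of the file; after the edit the relevant part of my influxdb.conf looks roughly like this (surrounding settings omitted):

    [data]
      # 0 removes the cap on unique values per tag key (use with care: unbounded series cardinality)
      max-values-per-tag = 0

If you run InfluxDB in Docker, the same setting can apparently also be overridden with the INFLUXDB_DATA_MAX_VALUES_PER_TAG environment variable; see the config documentation linked in the comments below.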

  • I have found that too. My concern is: 1M rows isn't that much for a time-series DB, is it? I believe 1M rows shouldn't be a problem, and yet this limit makes me think otherwise ... – Lucas Aimaretto Jan 13 '18 at 22:43
  • Yes, by default the number of values allowed per tag is pretty low. It should be at least 10 times that. – Ammad Jan 16 '18 at 20:55
  • Adding a link to more info: https://docs.influxdata.com/influxdb/v1.5/administration/config/#max-values-per-tag-100000 – silentpete Apr 06 '18 at 15:13
  • @Ammad Actually, 10000 is already quite a big value for this parameter. It's about the number of unique values for a given tag key: each unique value produces a new series (one can treat them as files on disk). Having 10000 separate files is already a bit strange, but having 1e6 files is unreasonable. – Ivan Velichko Apr 11 '18 at 15:09
  • This may be a "solution", but unless you really know what you are doing, chances are your InfluxDB schema is using tags in a non-optimal way. In general, tags should have few unique values (see the sketch after these comments). Further reference: https://docs.influxdata.com/influxdb/v1.7/concepts/schema_and_data_layout/ – Gustavo Bezerra Apr 23 '19 at 03:44
  • @GustavoBezerra: Why not? I have a vehicle id as a tag key, which can go into the millions. I use the id as a tag because I generally have to filter by vehicle – Nipun Jun 24 '20 at 04:39
  • @Nipun: Sure, that sounds like a legitimate use case. But I have seen people using tags for things that really weren't tags at all, hence my warning. With indexes no longer held in memory (using TSI), unique tag values in the millions should be OK. – Gustavo Bezerra Jun 25 '20 at 05:12
  • It's about the cardinality level, and the main point is that cardinality is bounded, so there is some limit on it. Beyond that, the cardinality level affects the size and efficiency of the index and the compactness of storage. But again, InfluxDB boldly claims to be able to ingest millions of data points per second; I can hardly imagine those coming from only 100000 devices if you use the device id as a tag. – Alexey Zimarev Oct 09 '20 at 19:10
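
Following up on the schema-design point above: if the "jkey" values are effectively unbounded, one option is to store that identifier as a field instead of a tag, so each new value no longer creates a new series. A minimal sketch (tag/field names and values are illustrative, not taken from the original code):

    import org.influxdb.dto.Point;
    import java.util.concurrent.TimeUnit;

    Point point = Point.measurement("cpu")
            .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
            .tag("host", "server01")        // keep low-cardinality dimensions as tags (indexed)
            .addField("jkey", "job-42")     // high-cardinality identifier as a field (stored, not indexed)
            .addField("usage", 42.0)
            .build();

The trade-off is that fields are not indexed, so filtering on jkey becomes a scan instead of an index lookup; whether that is acceptable depends on how often you query by it.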