I have read several discussions in the issue tracker and on Stack Overflow about why InfluxDB doesn't support vanilla SQL NULLs, and I'm perfectly fine with that. However, inserting empty strings (""), the commonly accepted substitute for NULLs, fails on large batches of datapoints. Example:
1. extract some 400-500 datapoints into individual Python dicts, each with the mandatory "time", "measurement", "fields" and "tags" keys
2. batch everything into one JSON body and call InfluxDBClient.write_points(that_json) (see the sketch after this list)
3. get influxdb.exceptions.InfluxDBClientError: 400: {"error":"partial write: unable to parse 'measurement,id=12,interval=300,type=meteostation 1514036400000000000': invalid field format dropped=0"} on large, but not on small, JSON batches
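For reference, a minimal sketch of the code that triggers this (connection details, measurement name and field/tag values are illustrative placeholders, not the exact production setup):

```python
from influxdb import InfluxDBClient

# Placeholder connection details
client = InfluxDBClient(host="localhost", port=8086, database="metrics")

points = []
for i in range(450):  # roughly the 400-500 datapoints mentioned above
    points.append({
        "measurement": "measurement",
        "tags": {"id": "12", "interval": "300", "type": "meteostation"},
        "time": 1514036400000000000 + i * 300 * 10**9,  # nanosecond timestamps
        # Empty strings ("") stand in as the NULL substitute; datapoints from
        # devices that logged nothing end up with an empty "fields" dict.
        "fields": {"value": ""} if i % 50 else {},
    })

# On large batches this raises influxdb.exceptions.InfluxDBClientError: 400
# "partial write: ... invalid field format"
client.write_points(points, time_precision="n")
```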
I looked very closely at the JSON and individual dicts and they are perfectly fine.
EDIT: I use InfluxDB 1.4.2 and the influxdb Python module 4.1.1 (updating to 5.0.0 did not solve the problem).
EDIT2: After additional tests it turned out that this is equivalent to issue #369, which originally did not include the related error output. The cause appears to be that line_protocol.make_lines emits a bare space (" ") in place of the field set when the "fields" dict is empty.
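The broken line can be reproduced by feeding such a point directly to make_lines (a minimal sketch, assuming the module layout of influxdb 4.x/5.x; the tag values are taken from the error message above):

```python
from influxdb.line_protocol import make_lines

data = {
    "points": [{
        "measurement": "measurement",
        "tags": {"id": "12", "interval": "300", "type": "meteostation"},
        "time": 1514036400000000000,
        "fields": {},  # nothing was logged for this datapoint
    }]
}

# With an empty "fields" dict the generated line has a blank where the field
# set should be, e.g.:
#   measurement,id=12,interval=300,type=meteostation 1514036400000000000
# which the server rejects with "invalid field format".
print(make_lines(data))
```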
EDIT3: Checking whether "fields" is empty doesn't actually help, because the InfluxDB line protocol apparently requires at least one field per point. This again argues in favor of being able to send NULL values for the case where a NULL means a device logged no data during an interval, yet the point itself still needs to be sent.
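For completeness, the only client-side workaround I can see is to drop such points before writing, which throws away exactly the information a NULL would have preserved (a sketch reusing the client and points names from the snippet above):

```python
def drop_fieldless_points(points):
    """Filter out datapoints whose "fields" dict is empty, since the line
    protocol refuses to accept a point without at least one field."""
    return [p for p in points if p.get("fields")]

# The silent-device datapoints simply vanish from the batch, so on the server
# there is no way to tell "device sent nothing" apart from "no point at all".
client.write_points(drop_fieldless_points(points), time_precision="n")
```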