Commit 5072190

feat: Add support for stream response (influxdata#30)

1 parent bc2813b commit 5072190

File tree: 6 files changed, +263 −53 lines changed

CHANGELOG.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@
 ### Features
 1. [#24](https://github.com/influxdata/influxdb-client-python/issues/24): Added possibility to write dictionary-style object
 1. [#27](https://github.com/influxdata/influxdb-client-python/issues/27): Added possibility to write bytes type of data
+1. [#30](https://github.com/influxdata/influxdb-client-python/issues/30): Added support for streaming a query response
 1. [#31](https://github.com/influxdata/influxdb-client-python/issues/31): Added support for delete metrics

 ### API

README.rst

Lines changed: 93 additions & 7 deletions

@@ -44,21 +44,24 @@ InfluxDB 2.0 client features
 - Querying data
   - using the Flux language
   - into csv, raw data, `flux_table <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/flux_table.py#L5>`_ structure
+  - `How to queries <#queries>`_
 - Writing data using
   - `Line Protocol <https://docs.influxdata.com/influxdb/v1.6/write_protocols/line_protocol_tutorial>`_
   - `Data Point <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/write/point.py#L16>`__
   - `RxPY <https://rxpy.readthedocs.io/en/latest/>`_ Observable
   - Not implemented yet
-    - write user types using decorator
-    - write Pandas DataFrame
+  - write user types using decorator
+  - write Pandas DataFrame
+  - `How to writes <#writes>`_
 - `InfluxDB 2.0 API <https://github.com/influxdata/influxdb/blob/master/http/swagger.yml>`_ client for management
   - the client is generated from the `swagger <https://github.com/influxdata/influxdb/blob/master/http/swagger.yml>`_ by using the `openapi-generator <https://github.com/OpenAPITools/openapi-generator>`_
   - organizations & users management
   - buckets management
   - tasks management
   - authorizations
   - health check
-- How To
+- ...
+- Examples
   - `Connect to InfluxDB Cloud`_
   - `How to efficiently import large dataset`_
   - `Efficiency write data from IOT sensor`_

@@ -79,7 +82,7 @@ The python package is hosted on Github, you can install latest version directly:

 .. code-block:: sh

-   pip3 install git+https://github.com/influxdata/influxdb-client-python.git
+   pip install influxdb-client

 Then import the package:

@@ -201,9 +204,9 @@ The batching is configurable by ``write_options``\ :
    from influxdb_client.client.write_api import SYNCHRONOUS

    _client = InfluxDBClient(url="http://localhost:9999", token="my-token", org="my-org")
-   _write_client = _client.write_api(write_options=WriteOptions(batch_size=500,
-                                                                flush_interval=10_000,
-                                                                jitter_interval=2_000,
+   _write_client = _client.write_api(write_options=WriteOptions(batch_size=500,
+                                                                flush_interval=10_000,
+                                                                jitter_interval=2_000,
                                                                 retry_interval=5_000))

    """

@@ -289,6 +292,89 @@ Data are writes in a synchronous HTTP request.
    client.__del__()

+Queries
+^^^^^^^
+
+The result retrieved by `QueryApi <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/query_api.py>`_ could be formatted as a:
+
+1. Flux data structure: `FluxTable <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/flux_table.py#L5>`_, `FluxColumn <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/flux_table.py#L22>`_ and `FluxRecord <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/flux_table.py#L31>`_
+2. `csv.reader <https://docs.python.org/3.4/library/csv.html#reader-objects>`__ which will iterate over CSV lines
+3. Raw unprocessed results as a ``str`` iterator
+
+The API also support streaming ``FluxRecord`` via `query_stream <https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/query_api.py#L77>`_, see example below:
+
+.. code-block:: python
+
+   from influxdb_client import InfluxDBClient, Point, Dialect
+   from influxdb_client.client.write_api import SYNCHRONOUS
+
+   client = InfluxDBClient(url="http://localhost:9999", token="my-token", org="my-org")
+
+   write_api = client.write_api(write_options=SYNCHRONOUS)
+   query_api = client.query_api()
+
+   """
+   Prepare data
+   """
+
+   _point1 = Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
+   _point2 = Point("my_measurement").tag("location", "New York").field("temperature", 24.3)
+
+   write_api.write(bucket="my-bucket", org="my-org", record=[_point1, _point2])
+
+   """
+   Query: using Table structure
+   """
+   tables = query_api.query('from(bucket:"my-bucket") |> range(start: -10m)')
+
+   for table in tables:
+       print(table)
+       for record in table.records:
+           print(record.values)
+
+   print()
+   print()
+
+   """
+   Query: using Stream
+   """
+   records = query_api.query_stream('from(bucket:"my-bucket") |> range(start: -10m)')
+
+   for record in records:
+       print(f'Temperature in {record["location"]} is {record["_value"]}')
+
+   """
+   Interrupt a stream after retrieve a required data
+   """
+   large_stream = query_api.query_stream('from(bucket:"my-bucket") |> range(start: -100d)')
+   for record in large_stream:
+       if record["location"] == "New York":
+           print(f'New York temperature: {record["_value"]}')
+           break
+
+   large_stream.close()
+
+   print()
+   print()
+
+   """
+   Query: using csv library
+   """
+   csv_result = query_api.query_csv('from(bucket:"my-bucket") |> range(start: -10m)',
+                                    dialect=Dialect(header=False, delimiter=",", comment_prefix="#", annotations=[],
+                                                    date_time_format="RFC3339"))
+   for csv_line in csv_result:
+       if not len(csv_line) == 0:
+           print(f'Temperature in {csv_line[9]} is {csv_line[6]}')
+
+   """
+   Close client
+   """
+   client.__del__()
+
+Examples
+^^^^^^^^
+
 How to efficiently import large dataset
 """""""""""""""""""""""""""""""""""""""
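The "Interrupt a stream" snippet in the new Queries section breaks out of iteration and then calls ``close()`` on the stream. A minimal sketch of why that explicit close matters, using a plain Python generator (not the client's actual implementation): closing a suspended generator runs its ``finally`` block, which is where a streaming API releases its underlying response.

```python
cleanup_log = []


def stream_records(source):
    """Yield items from source, guaranteeing cleanup.

    Mirrors the shape of a query_stream-style API: the finally block
    runs when iteration finishes OR when the caller closes the
    generator after breaking out early.
    """
    try:
        for item in source:
            yield item
    finally:
        cleanup_log.append("closed")  # stands in for releasing the HTTP response


records = stream_records(["Prague", "New York", "London"])

found = None
for record in records:
    if record == "New York":
        found = record
        break  # the generator is now suspended, not finished

records.close()  # like large_stream.close(): triggers the finally block
```

Without the explicit ``close()``, cleanup would only happen whenever the garbage collector finalized the abandoned generator.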

examples/query.py

Lines changed: 53 additions & 13 deletions

@@ -1,26 +1,66 @@
-from influxdb_client import InfluxDBClient, Point
+from influxdb_client import InfluxDBClient, Point, Dialect
 from influxdb_client.client.write_api import SYNCHRONOUS

 client = InfluxDBClient(url="http://localhost:9999", token="my-token", org="my-org")

 write_api = client.write_api(write_options=SYNCHRONOUS)
 query_api = client.query_api()

-p = Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
+"""
+Prepare data
+"""

-write_api.write(bucket="my-bucket", org="my-org", record=p)
+_point1 = Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
+_point2 = Point("my_measurement").tag("location", "New York").field("temperature", 24.3)

-## using Table structure
+write_api.write(bucket="my-bucket", org="my-org", record=[_point1, _point2])
+
+"""
+Query: using Table structure
+"""
 tables = query_api.query('from(bucket:"my-bucket") |> range(start: -10m)')

 for table in tables:
     print(table)
-    for row in table.records:
-        print(row.values)
-
-## using csv library
-csv_result = query_api.query_csv('from(bucket:"my-bucket") |> range(start: -10m)')
-val_count = 0
-for row in csv_result:
-    for cell in row:
-        val_count += 1
+    for record in table.records:
+        print(record.values)
+
+print()
+print()
+
+"""
+Query: using Stream
+"""
+records = query_api.query_stream('from(bucket:"my-bucket") |> range(start: -10m)')
+
+for record in records:
+    print(f'Temperature in {record["location"]} is {record["_value"]}')
+
+"""
+Interrupt a stream after retrieve a required data
+"""
+large_stream = query_api.query_stream('from(bucket:"my-bucket") |> range(start: -100d)')
+for record in large_stream:
+    if record["location"] == "New York":
+        print(f'New York temperature: {record["_value"]}')
+        break
+
+large_stream.close()
+
+print()
+print()
+
+"""
+Query: using csv library
+"""
+csv_result = query_api.query_csv('from(bucket:"my-bucket") |> range(start: -10m)',
+                                 dialect=Dialect(header=False, delimiter=",", comment_prefix="#", annotations=[],
+                                                 date_time_format="RFC3339"))
+for csv_line in csv_result:
+    if not len(csv_line) == 0:
+        print(f'Temperature in {csv_line[9]} is {csv_line[6]}')
+
+"""
+Close client
+"""
+client.__del__()
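The ``query_csv`` loop in the example skips empty CSV lines before indexing into columns. A small self-contained illustration of that pattern with Python's own ``csv`` module; the rows below are synthetic, and the column positions differ from the ``csv_line[9]``/``csv_line[6]`` of the real annotated-CSV layout, which depend on InfluxDB's response format:

```python
import csv
import io

# Synthetic rows standing in for a CSV query response; the blank
# separator line parses as an empty list, which the loop skips
# before indexing into columns.
raw = ",result,table,_value,location\n,,0,25.3,Prague\n\n,,0,24.3,New York\n"

rows = []
for csv_line in csv.reader(io.StringIO(raw)):
    if len(csv_line) == 0:
        continue  # skip the blank separator line
    rows.append(csv_line)

# In this synthetic layout, column 4 holds the "location" value.
locations = [row[4] for row in rows[1:]]
```

Guarding on ``len(csv_line) == 0`` is necessary because ``csv.reader`` yields an empty list for a blank line, so any positional index would raise ``IndexError`` there.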

influxdb_client/client/flux_csv_parser.py

Lines changed: 29 additions & 21 deletions

@@ -2,8 +2,8 @@
 import codecs
 import csv as csv_parser

-from dateutil.parser import parse as timestamp_parser
 import ciso8601
+from urllib3 import HTTPResponse

 from influxdb_client.client.flux_table import FluxTable, FluxColumn, FluxRecord

@@ -20,21 +20,32 @@ class FluxCsvParserException(Exception):

 class FluxCsvParser(object):

-    def __init__(self) -> None:
+    def __init__(self, response: HTTPResponse, stream: bool) -> None:
+        self._response = response
+        self.tables = []
+        self._stream = stream
         pass

-    def parse_flux_response(self, response, cancellable, consumer):
+    def __enter__(self):
+        self._reader = csv_parser.reader(codecs.iterdecode(self._response, 'utf-8'))
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        self._response.close()
+
+    def generator(self):
+        with self as parser:
+            yield from parser._parse_flux_response()
+
+    def _parse_flux_response(self):
         table_index = 0
         start_new_table = False
         table = None
         parsing_state_error = False
-        reader = csv_parser.reader(codecs.iterdecode(response, 'utf-8'))

-        for csv in reader:
+        for csv in self._reader:
             # debug
             # print("parsing: ", csv)
-            if (cancellable is not None) and cancellable.canceled:
-                return

             # Response has HTTP status ok, but response is error.
             if len(csv) < 1:

@@ -55,7 +66,7 @@ def parse_flux_response(self, response, cancellable, consumer):
                 if "#datatype" == token:
                     start_new_table = True
                     table = FluxTable()
-                    consumer.accept_table(index=table_index, cancellable=cancellable, flux_table=table)
+                    self._insert_table(table, table_index)
                     table_index = table_index + 1
                 elif table is None:
                     raise FluxCsvParserException("Unable to parse CSV response. FluxTable definition was not found.")

@@ -85,11 +96,16 @@ def parse_flux_response(self, response, cancellable, consumer):
                     flux_columns = table.columns
                     table = FluxTable()
                     table.columns.extend(flux_columns)
-                    consumer.accept_table(table_index, cancellable, table)
+                    self._insert_table(table, table_index)
                     table_index = table_index + 1

                 flux_record = self.parse_record(table_index - 1, table, csv)
-                consumer.accept_record(table_index - 1, cancellable, flux_record)
+
+                if not self._stream:
+                    self.tables[table_index - 1].records.append(flux_record)
+
+                yield flux_record
+
                 # debug
                 # print(flux_record)

@@ -163,14 +179,6 @@ def add_column_names_and_tags(table, csv):
             column.label = csv[i]
             i += 1

-
-class FluxResponseConsumerTable:
-
-    def __init__(self) -> None:
-        self.tables = []
-
-    def accept_table(self, index, cancellable, flux_table):
-        self.tables.insert(index, flux_table)
-
-    def accept_record(self, index, cancellable, flux_record):
-        self.tables[index].records.append(flux_record)
+    def _insert_table(self, table, table_index):
+        if not self._stream:
+            self.tables.insert(table_index, table)
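The parser refactor replaces the consumer/cancellable callbacks with a context-manager-plus-generator pattern: parsing runs lazily inside ``with self``, so the response is closed automatically whether the caller exhausts the generator or closes it early, and non-stream mode additionally accumulates results on the parser. A minimal self-contained sketch of the same pattern, using a hypothetical ``LineParser`` over an in-memory stream rather than the actual influxdb-client code:

```python
import io


class LineParser:
    """Lazily parse lines from a file-like response.

    Mirrors the FluxCsvParser refactor: in stream mode records are only
    yielded; otherwise they are also accumulated on the parser object.
    """

    def __init__(self, response, stream):
        self._response = response
        self._stream = stream
        self.lines = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Closing here is what lets callers abandon the generator early
        # without leaking the underlying response.
        self._response.close()

    def generator(self):
        with self as parser:
            yield from parser._parse()

    def _parse(self):
        for line in self._response:
            record = line.strip()
            if not self._stream:
                self.lines.append(record)
            yield record


# Non-stream mode: consume everything; results are retained on the parser.
parser = LineParser(io.StringIO("a\nb\nc\n"), stream=False)
collected = list(parser.generator())

# Stream mode: stop early; generator.close() unwinds the with-block,
# running __exit__ and closing the response.
stream_response = io.StringIO("a\nb\nc\n")
stream = LineParser(stream_response, stream=True).generator()
first = next(stream)
stream.close()
```

The key property is that the ``with`` block lives inside the generator, so closing the generator (raising ``GeneratorExit`` at the suspended ``yield``) is enough to trigger ``__exit__``; callers never have to manage the response themselves.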
