Confluent's Apache Kafka client for Golang
==========================================

Confluent's Kafka client for Golang wraps the librdkafka C library, providing
full Kafka protocol support with great performance and reliability.

The Golang bindings provide a high-level Producer and Consumer with support
for the balanced consumer groups of Apache Kafka 0.9 and above.

**License**: [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0)


Beta information
================

The Go client is currently in beta and APIs are subject to (minor) change.


API strands
===========

There are two main API strands: channel based or function based.

Channel based consumer
----------------------
|

Cons:

 * Outdated events and messages may be consumed due to the buffering nature
   of channels. The extent is limited, but not remedied, by the Events channel
   buffer size (`go.events.channel.size`); see the sketch below.

See [examples/consumer_channel_example](examples/consumer_channel_example)

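For illustration, here is a minimal sketch of the channel based approach. It
assumes `go.events.channel.enable` is the property that switches the consumer
into channel mode; broker address, group id and topic name are placeholders:

    package main

    import (
        "fmt"

        "github.com/confluentinc/confluent-kafka-go/kafka"
    )

    func main() {
        // go.events.channel.enable turns on the Events() channel;
        // go.events.channel.size bounds (but does not remedy) its buffering.
        c, err := kafka.NewConsumer(&kafka.ConfigMap{
            "bootstrap.servers":        "localhost:9092", // placeholder
            "group.id":                 "my-group",       // placeholder
            "go.events.channel.enable": true,
            "go.events.channel.size":   1000,
        })
        if err != nil {
            panic(err)
        }
        defer c.Close()

        if err = c.SubscribeTopics([]string{"my-topic"}, nil); err != nil {
            panic(err)
        }

        // Messages and other event types arrive on the same buffered channel.
        for ev := range c.Events() {
            switch e := ev.(type) {
            case *kafka.Message:
                fmt.Printf("Message on %s: %s\n", e.TopicPartition, string(e.Value))
            default:
                fmt.Printf("Ignored event: %v\n", e)
            }
        }
    }
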
|

Function based consumer
-----------------------

Cons:

 * Makes it harder to read from multiple channels, but a go-routine easily
   solves that (see the channel based consumer's Cons above regarding outdated
   events).
 * Slower than the channel consumer.

See [examples/consumer_example](examples/consumer_example)

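A corresponding sketch of the function based approach: the consumer is driven
by calling `Poll()` with a timeout in milliseconds (placeholder names as in
the sketch above; the loop runs forever for brevity):

    package main

    import (
        "fmt"

        "github.com/confluentinc/confluent-kafka-go/kafka"
    )

    func main() {
        c, err := kafka.NewConsumer(&kafka.ConfigMap{
            "bootstrap.servers": "localhost:9092", // placeholder
            "group.id":          "my-group",       // placeholder
        })
        if err != nil {
            panic(err)
        }
        defer c.Close()

        if err = c.SubscribeTopics([]string{"my-topic"}, nil); err != nil {
            panic(err)
        }

        // Run this loop in its own go-routine if the application also needs
        // to read from other channels.
        for {
            ev := c.Poll(100) // returns nil on timeout
            if ev == nil {
                continue
            }
            switch e := ev.(type) {
            case *kafka.Message:
                fmt.Printf("Message on %s: %s\n", e.TopicPartition, string(e.Value))
            default:
                fmt.Printf("Ignored event: %v\n", e)
            }
        }
    }
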

Build
=====

    $ go install


Static builds
=============

**NOTE**: Requires pkg-config

To link your application statically with librdkafka, append `-tags static` to
your application's `go build` command, e.g.:

    $ cd kafkatest/go_verifiable_consumer
    $ go build -tags static

This will create a binary with librdkafka statically linked. Note, however,
that any librdkafka dependencies (such as ssl, sasl2, lz4, etc., depending on
the librdkafka build configuration) will be linked dynamically and must thus
be present on the target system.

To create a completely static binary, append `-tags static_all` instead.
This requires all dependencies to be available as static libraries
(e.g., libsasl2.a). Static libraries are typically not installed by default
but are available in the corresponding `..-dev` or `..-devel` packages
(e.g., libsasl2-dev).
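
For example, building the same test program as above completely statically:

    $ cd kafkatest/go_verifiable_consumer
    $ go build -tags static_all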

After a successful static build, verify the dependencies by running
`ldd ./your_program`; librdkafka should not be listed.


Tests
=====

Run these commands in `$GOPATH/src/github.com/confluentinc/confluent-kafka-go`:

    cd kafka
    go install