Commit 53f5291

Merge pull request confluentinc#3 from edenhill/kafkatesting

kafkatest support, golint, and more

2 parents: a08cf87 + 523bf2f

39 files changed: +2247 −1021 lines

README.md (38 additions, 120 deletions)
@@ -1,31 +1,22 @@
 Confluent's Apache Kafka client for Golang
 ==========================================
 
-**WARNING: This client is in initial development, NOT FOR PRODUCTION USE**
-
 Confluent's Kafka client for Golang wraps the librdkafka C library, providing
 full Kafka protocol support with great performance and reliability.
 
 The Golang bindings provides a high-level Producer and Consumer with support
-for the balanced consumer groups of Apache Kafka 0.9.
-
-See the [API documentation: FIXME]()
+for the balanced consumer groups of Apache Kafka 0.9 and above.
 
 **License**: [Apache License v2.0](http://www.apache.org/licenses/LICENSE-2.0)
 
 
-Early preview information
-=========================
-
-The Go client is currently under heavy initial development and is not
-ready for production use. APIs are not to be considered stable.
-
-As an excercise for early birds the Go client currently provides
-a number of possibly competing interfaces to various functionality.
-Your feedback is highly valuable to us on which APIs that should go into
-the final client.
+Beta information
+================
+The Go client is currently in beta and APIs are subject to (minor) change.
 
-There are two main strands: channel based or function based.
+API strands
+===========
+There are two main API strands: channel based or function based.
 
 Channel based consumer
 ----------------------
@@ -41,7 +32,9 @@ Pros:
 
 Cons:
 
-* ?
+* Outdated events and messages may be consumed due to the buffering nature
+  of channels. The extent is limited, but not remedied, by the Events channel
+  buffer size (`go.events.channel.size`).
 
 See [examples/consumer_channel_example](examples/consumer_channel_example)
 
@@ -58,7 +51,8 @@ Pros:
 
 Cons:
 
-* Makes it harder to read from multiple channels, but a go-routine easily solves that.
+* Makes it harder to read from multiple channels, but a go-routine easily
+  solves that (see Cons in channel based consumer above about outdated events).
 * Slower than the channel consumer.
 
 See [examples/consumer_example](examples/consumer_example)
@@ -129,6 +123,32 @@ Build
     $ go install
 
 
+Static builds
+=============
+
+**NOTE**: Requires pkg-config
+
+To link your application statically with librdkafka, append `-tags static` to
+your application's `go build` command, e.g.:
+
+    $ cd kafkatest/go_verifiable_consumer
+    $ go build -tags static
+
+This will create a binary with librdkafka statically linked. Note, however,
+that any librdkafka dependencies (such as ssl, sasl2, lz4, etc., depending
+on the librdkafka build configuration) will be linked dynamically and must
+be present on the target system.
+
+To create a completely static binary, append `-tags static_all` instead.
+This requires all dependencies to be available as static libraries
+(e.g., libsasl2.a). Static libraries are typically not installed
+by default but are available in the corresponding `..-dev` or `..-devel`
+packages (e.g., libsasl2-dev).
+
+After a successful static build, verify the dependencies by running
+`ldd ./your_program`; librdkafka should not be listed.
+
+
 
 Tests
 =====
@@ -159,105 +179,3 @@ in `$GOPATH/src/github.com/confluentinc/confluent-kafka-go`:
     cd kafka
     go install
 
-
-High-level consumer
--------------------
-
-* Decide if you want to read messages and events from the `.Events()` channel
-  (set `"go.events.channel.enable": true`) or by calling `.Poll()`.
-
-* Create a Consumer with `kafka.NewConsumer()` providing at
-  least the `bootstrap.servers` and `group.id` configuration properties.
-
-* Call `.Subscribe()` or (`.SubscribeTopics()` to subscribe to multiple topics)
-  to join the group with the specified subscription set.
-  Subscriptions are atomic, calling `.Subscribe*()` again will leave
-  the group and rejoin with the new set of topics.
-
-* Start reading events and messages from either the `.Events` channel
-  or by calling `.Poll()`.
-
-* When the group has rebalanced each client member is assigned a
-  (sub-)set of topic+partitions.
-  By default the consumer will start fetching messages for its assigned
-  partitions at this point, but your application may enable rebalance
-  events to get an insight into what the assigned partitions where
-  as well as set the initial offsets. To do this you need to pass
-  `"go.application.rebalance.enable": true` to the `NewConsumer()` call
-  mentioned above. You will (eventually) see a `kafka.AssignedPartitions` event
-  with the assigned partition set. You can optionally modify the initial
-  offsets (they'll default to stored offsets and if there are no previously stored
-  offsets it will fall back to `"default.topic.config": {"auto.offset.reset": ..}`
-  which defaults to the `latest` message) and then call `.Assign(partitions)`
-  to start consuming. If you don't need to modify the initial offsets you will
-  not need to call `.Assign()`, the client will do so automatically for you if
-  you dont.
-
-* As messages are fetched they will be made available on either the
-  `.Events` channel or by calling `.Poll()`, look for event type `*kafka.Message`.
-
-* Handle messages, events and errors to your liking.
-
-* When you are done consuming call `.Close()` to commit final offsets
-  and leave the consumer group.
-
-
-Producer
---------
-
-* Create a Producer with `kafka.NewProducer()` providing at least
-  the `bootstrap.servers` configuration properties.
-
-* Messages may now be produced either by sending a `*kafka.Message`
-  on the `.ProduceChannel` or by calling `.Produce()`.
-
-* Producing is an asynchronous operation so the client notifies the application
-  of per-message produce success or failure through something called delivery reports.
-  Delivery reports are by default emitted on the `.Events` channel as `*kafka.Message`
-  and you should check `msg.TopicPartition.Error` for `nil` to find out if the message
-  was succesfully delivered or not.
-  It is also possible to direct delivery reports to alternate channels
-  by providing a non-nil `chan Event` channel to `.Produce()`.
-  If no delivery reports are wanted they can be completely disabled by
-  setting configuration property `"go.delivery.reports": false`.
-
-* When you are done producing messages you will need to make sure all messages
-  are indeed delivered to the broker (or failed), remember that this is
-  an asynchronous client so some of your messages may be lingering in internal
-  channels or tranmission queues.
-  To do this you can either keep track of the messages you've produced
-  and wait for their corresponding delivery reports, or call the convenience
-  function `.Flush()` that will block until all message deliveries are done
-  or the provided timeout elapses.
-
-* Finally call `.Close()` to decommission the producer.
-
-
-Events
-------
-
-Apart from emitting messages and delivery reports the client also communicates
-with the application through a number of different event types.
-An application may choose to handle or ignore these events.
-
-**Consumer events**:
-* `*kafka.Message` - a fetched message.
-* `AssignedPartitions` - The assigned partition set for this client following a rebalance.
-  Requires `go.application.rebalance.enable`
-* `RevokedPartitions` - The counter part to `AssignedPartitions` following a rebalance.
-  `AssignedPartitions` and `RevokedPartitions` are symetrical.
-  Requires `go.application.rebalance.enable`
-* `PartitionEof` - Consumer has reached the end of a partition.
-  NOTE: The consumer keeps trying to fetch new messages for the partition.
-
-**Producer events**:
-* `*kafka.Message` - delivery report for produced message.
-  Check `.TopicPartition.Error` for delivery result.
-
-**Generic events** for both Consumer and Producer:
-* `KafkaError` - client (error codes are prefixed with _) or broker error.
-  These errors are normally just informational since the
-  client will try its best to automatically recover (eventually).
-
-See the [examples](examples) directory for example implementations of the above.

examples/.gitignore (1 addition, 1 deletion)

@@ -2,4 +2,4 @@ consumer_channel_example/consumer_channel_example
 consumer_example/consumer_example
 producer_channel_example/producer_channel_example
 producer_example/producer_example
-gofkacat/gofkacat
+go-kafkacat/go-kafkacat

examples/README (2 additions, 2 deletions)

@@ -7,13 +7,13 @@ Examples:
 producer_channel_example - Channel based producer
 producer_example - Function based producer
 
-gofkacat - Channel based kafkacat Go clone
+go-kafkacat - Channel based kafkacat Go clone
 
 
 Usage example:
 
 $ cd consumer_example
-$ go build
+$ go build (or 'go install')
 $ ./consumer_example   # see usage
 $ ./consumer_example mybroker mygroup mytopic
 
examples/consumer_channel_example/consumer_channel_example.go (6 additions, 4 deletions)

@@ -1,3 +1,6 @@
+// Example channel-based high-level Apache Kafka consumer
+package main
+
 /**
  * Copyright 2016 Confluent Inc.
  *
@@ -13,7 +16,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package main
 
 import (
 	"fmt"
@@ -63,7 +65,7 @@ func main() {
 			fmt.Printf("Caught signal %v: terminating\n", sig)
 			run = false
 
-		case ev := <-c.Events:
+		case ev := <-c.Events():
 			switch e := ev.(type) {
 			case kafka.AssignedPartitions:
 				fmt.Fprintf(os.Stderr, "%% %v\n", e)
@@ -81,9 +83,9 @@ func main() {
 					e.TopicPartition, string(e.Value))
 			}
 
-		case kafka.PartitionEof:
+		case kafka.PartitionEOF:
 			fmt.Printf("%% Reached %v\n", e)
-		case kafka.KafkaError:
+		case kafka.Error:
 			fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
 			run = false
 		}

examples/consumer_example/consumer_example.go (6 additions, 5 deletions)

@@ -1,3 +1,6 @@
+// Example function-based high-level Apache Kafka consumer
+package main
+
 /**
  * Copyright 2016 Confluent Inc.
  *
@@ -13,7 +16,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package main
 
 // consumer_example implements a consumer using the non-channel Poll() API
 // to retrieve messages and events.
@@ -37,8 +39,7 @@ func main() {
 	broker := os.Args[1]
 	group := os.Args[2]
 	topics := os.Args[3:]
-
-	sigchan := make(chan os.Signal)
+	sigchan := make(chan os.Signal, 1)
 	signal.Notify(sigchan, syscall.SIGINT, syscall.SIGTERM)
 
 	c, err := kafka.NewConsumer(&kafka.ConfigMap{
@@ -79,9 +80,9 @@ func main() {
 				e.TopicPartition, string(e.Value))
 			}
 
-		case kafka.PartitionEof:
+		case kafka.PartitionEOF:
 			fmt.Printf("%% Reached %v\n", e)
-		case kafka.KafkaError:
+		case kafka.Error:
 			fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
 			run = false
 		default:
