[exporter][batcher] Multi-batch support - Version 2 #12760
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main   #12760      +/-   ##
==========================================
+ Coverage   91.29%   91.30%   +0.01%
==========================================
  Files         508      509       +1
  Lines       28695    28751      +56
==========================================
+ Hits        26196    26252      +56
  Misses       1986     1986
  Partials      513      513
}

func newMultiBatcher(bCfg BatchConfig, bSet batcherSettings[request.Request]) *multiBatcher {
	// TODO: Determine what is the right behavior for this in combination with async queue.
What question(s) are there? I'm not sure I'm following.
This comment is duplicated from `newDefaultBatcher()`. IIUC, this is related to how goroutines are allocated when the async queue and the batcher are used together. Right now:

- `AsyncQueue` owns a goroutine pool of size n that is in charge of reading from the queue and calling `Batcher::consume()`.
- `Batcher::consume()` is in charge of appending the new item to the batch and invoking the flushing goroutine if needed.
- `Batcher` can allocate up to m goroutines for dispatching the active batch.

Both n and m come from the same config field `sending_queue::num_consumers`, but it does not necessarily make sense to use the same number of goroutines for "reading from queue" and for "dispatching the request". A simplified sketch of this coupling is included below.
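To make the coupling concrete, here is a minimal, self-contained Go sketch. It is not the collector's actual implementation: `batcher`, `export`, and the semaphore-based flush bound are illustrative stand-ins. It only shows how a single `numConsumers` value (standing in for `sending_queue::num_consumers`) ends up sizing both the queue-reader pool (n) and the concurrent-flush bound (m).

```go
package main

import "sync"

type request struct{ items int }

// batcher accumulates requests and flushes full batches on a bounded
// number of dispatch goroutines ("m" in the discussion above).
type batcher struct {
	mu        sync.Mutex
	active    []request
	dispatch  chan struct{} // semaphore bounding concurrent flushes
	flushSize int
}

func newBatcher(maxFlushGoroutines, flushSize int) *batcher {
	return &batcher{
		dispatch:  make(chan struct{}, maxFlushGoroutines),
		flushSize: flushSize,
	}
}

// consume appends the item and, once the batch is full, hands it to a
// flush goroutine, blocking if too many flushes are already in flight.
func (b *batcher) consume(r request) {
	b.mu.Lock()
	b.active = append(b.active, r)
	if len(b.active) < b.flushSize {
		b.mu.Unlock()
		return
	}
	batch := b.active
	b.active = nil
	b.mu.Unlock()

	b.dispatch <- struct{}{} // acquire a flush slot
	go func() {
		defer func() { <-b.dispatch }()
		export(batch)
	}()
}

func export(batch []request) { /* send downstream; omitted */ }

func main() {
	numConsumers := 4 // stands in for sending_queue::num_consumers
	queue := make(chan request, 100)

	// The same value sizes the flush bound ("m")...
	b := newBatcher(numConsumers, 10)

	// ...and the queue-reader pool ("n").
	var wg sync.WaitGroup
	for i := 0; i < numConsumers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range queue {
				b.consume(r)
			}
		}()
	}

	for i := 0; i < 100; i++ {
		queue <- request{items: 1}
	}
	close(queue)
	wg.Wait() // waiting for in-flight flushes is omitted in this sketch
}
```

Splitting these into separate settings would let "reading from the queue" and "dispatching the request" be tuned independently, which is the open question behind the TODO.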
Description

This PR introduces two new components:

- `Partitioner` - an interface for fetching the batch key. A partitioner type should implement the function `GetKey()`, which returns the batching key. The `Partitioner` should be provided to the `queue_batcher` along with `sizer` in `queue_batch::Settings`.
- `multi_batcher` - supports key-based batching by routing requests to a corresponding `shard_batcher`. Each `shard_batcher` corresponds to a shard described in Exporter batcher dynamic sharding of the partitions #12473. (See the sketch after this list.)
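As context, here is a rough, self-contained Go sketch of the idea. It is not the code in this PR, and the names and signatures are simplified: the real `newMultiBatcher` takes a `BatchConfig` and `batcherSettings[request.Request]`, and the tenant-based key below is purely illustrative.

```go
package main

import "sync"

// Partitioner computes the batching key for a request.
type Partitioner[T any] interface {
	GetKey(req T) string
}

// shardBatcher batches requests that share a key.
type shardBatcher[T any] struct {
	mu    sync.Mutex
	items []T
}

func (s *shardBatcher[T]) consume(req T) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items = append(s.items, req)
	// size/timeout-based flushing omitted in this sketch
}

// multiBatcher routes each request to the shardBatcher for its key,
// creating shards lazily as new keys appear.
type multiBatcher[T any] struct {
	partitioner Partitioner[T]
	mu          sync.Mutex
	shards      map[string]*shardBatcher[T]
}

func newMultiBatcher[T any](p Partitioner[T]) *multiBatcher[T] {
	return &multiBatcher[T]{partitioner: p, shards: map[string]*shardBatcher[T]{}}
}

func (m *multiBatcher[T]) consume(req T) {
	key := m.partitioner.GetKey(req)
	m.mu.Lock()
	shard, ok := m.shards[key]
	if !ok {
		shard = &shardBatcher[T]{}
		m.shards[key] = shard
	}
	m.mu.Unlock()
	shard.consume(req)
}

// Illustrative partitioner keyed on a value carried by the request.
type tenantRequest struct{ tenant string }

type tenantPartitioner struct{}

func (tenantPartitioner) GetKey(req tenantRequest) string { return req.tenant }

func main() {
	mb := newMultiBatcher[tenantRequest](tenantPartitioner{})
	mb.consume(tenantRequest{tenant: "a"})
	mb.consume(tenantRequest{tenant: "b"}) // lands in a separate shard
}
```

The point is only the routing shape: `GetKey()` selects a partition, and each partition gets its own shard-batcher state.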
Link to tracking issue
#12795
Testing
Documentation