Expose size metrics for batch sizes #13002
Conversation
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main   #13002      +/-   ##
==========================================
+ Coverage   91.29%   91.33%   +0.04%
==========================================
  Files         509      509
  Lines       28735    28786      +51
==========================================
+ Hits        26233    26293      +60
+ Misses       1988     1982       -6
+ Partials      514      511       -3
```
Force-pushed from 9541083 to d382a21.
Ping @open-telemetry/collector-approvers
Thank you for tackling this. I think there are a few issues with the current implementation (and a bunch of nitpicks as well, but don't feel the need to address all of those).
Independently of the code itself: since the original issue only called for a way to measure batch sizes in bytes, and considering the relatively large size of this PR, maybe it would be worth splitting the latency/flush-reason metrics into their own PRs for easier review?
Thank you for your fast review! I will take care of all of them.

Sure, I will do that. Thanks again.
Thank you for the update! A few more comments:
```go
	// Only compute the (potentially expensive) BytesSize() when the
	// instrument is enabled.
	if instr, ok := qb.telemetryBuilder.ExporterBatchSendSizeBytes.(interface{ Enabled(context.Context) bool }); !ok || instr.Enabled(ctx) {
		qb.telemetryBuilder.ExporterBatchSendSizeBytes.Record(ctx, int64(req.BytesSize()))
	}
}
```
I'm not sure `shardBatcher.flush` is the right place for this:

- it's only used if batching is enabled;
- it's only called when data exits the batcher.

I think that, if the aim is to help people size their queues, it would make more sense to measure the size of batches entering (or even just attempting to enter) the queue. Issue #12894 says "records the size in bytes of each batch added to the queue", so I think that was what Dan had in mind.

Given this, I think it would make more sense to put the instrumentation in `obsQueue.Offer`, especially since emitting metrics is the purpose of `obsQueue` and it already has a reference to a TelemetryBuilder.
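For illustration, a minimal sketch of that suggestion (the instrument name, the simplified stand-in types, and the placement of `Record` are assumptions for this comment, not code from this PR or the repository):

```go
import (
	"context"

	"go.opentelemetry.io/otel/metric"
)

// sizedRequest stands in for the exporter request type; BytesSize is the
// accessor this PR already relies on.
type sizedRequest interface {
	BytesSize() int
}

// queue stands in for the delegate queue wrapped by obsQueue.
type queue[T any] interface {
	Offer(ctx context.Context, req T) error
}

// obsQueueSketch is a simplified stand-in for obsQueue, which already owns a
// TelemetryBuilder in the real code.
type obsQueueSketch[T sizedRequest] struct {
	delegate            queue[T]
	queueBatchSizeBytes metric.Int64Histogram // assumed instrument name
}

func (q *obsQueueSketch[T]) Offer(ctx context.Context, req T) error {
	// Capture the size before delegating, since downstream components
	// (e.g. the batcher) may modify the request after it is accepted.
	size := int64(req.BytesSize())
	if err := q.delegate.Offer(ctx, req); err != nil {
		return err
	}
	q.queueBatchSizeBytes.Record(ctx, size)
	return nil
}
```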
I'm not sure about that... maybe we are talking about two different kinds of metrics.

In our case, for instance, we want to measure the size of the batches that the batching mechanism in the exporter creates, not the size of the batch before it enters the queue.

From this comment, I understand doing it in `obsQueue` is not possible:

opentelemetry-collector/exporter/exporterhelper/internal/queuebatch/obs_queue.go, lines 85 to 87 in 9a620a0:

```go
func (or *obsQueue[T]) Offer(ctx context.Context, req T) error {
	// Have to read the number of items before sending the request since the request can
	// be modified by the downstream components like the batcher.
```
Also, I want to expose other metrics, like counting how many times we send data because of `flush_timeout`, for instance.
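A sketch of what such a flush-reason metric could look like (the counter, the helper, and the `reason` attribute values are illustrative assumptions, not part of this PR):

```go
import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordFlush is a hypothetical helper that counts batch flushes by reason
// (e.g. "flush_timeout" vs. "size_limit") on a counter instrument.
func recordFlush(ctx context.Context, flushCount metric.Int64Counter, reason string) {
	flushCount.Add(ctx, 1, metric.WithAttributes(attribute.String("reason", reason)))
}
```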
> In our case, for instance, we want to measure the size of the batches that the batching mechanism in the exporter creates, not the size of the batch before it enters the queue.

Issue #12894 is explicitly concerned with helping users size their (especially persistent) queue. Perhaps both metrics are useful, but only the one measuring sizes before the queue would be a step towards resolving that issue.
Perhaps this PR can offer an `otelcol_exporter_queue_batch_size` metric which measures items coming into the queue, and your other PR that tracks batcher metrics (such as reasons for flushing) can add a separate `otelcol_exporter_batcher_batch_size` metric measuring items coming into the batcher. Although we'll have to motivate the use case for it.
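For concreteness, a sketch of how the two proposed instruments might be declared (descriptions and units are assumptions; in this repository the instruments would normally be generated from metadata.yaml by mdatagen rather than declared by hand):

```go
import "go.opentelemetry.io/otel/metric"

// newBatchSizeInstruments is a hypothetical constructor for the two proposed
// histograms: one measuring batches entering the queue, one measuring batches
// produced by the batcher.
func newBatchSizeInstruments(meter metric.Meter) (queueBatchSize, batcherBatchSize metric.Int64Histogram, err error) {
	queueBatchSize, err = meter.Int64Histogram(
		"otelcol_exporter_queue_batch_size",
		metric.WithDescription("Size of batches entering the queue."),
		metric.WithUnit("{items}"),
	)
	if err != nil {
		return nil, nil, err
	}
	batcherBatchSize, err = meter.Int64Histogram(
		"otelcol_exporter_batcher_batch_size",
		metric.WithDescription("Size of batches produced by the batcher."),
		metric.WithUnit("{items}"),
	)
	return queueBatchSize, batcherBatchSize, err
}
```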
Ok, will work on it.
```go
tb, err := metadata.NewTelemetryBuilder(tt.NewTelemetrySettings())
require.NoError(t, err)
t.Cleanup(func() { tb.Shutdown() })
```
Given how many times it's repeated, it might be worth adding a test helper for this.
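Something along these lines could work (a sketch; the helper name is hypothetical, `metadata` is the generated package already used in these tests, and the body is just the repeated snippet above):

```go
import (
	"testing"

	"github.com/stretchr/testify/require"

	"go.opentelemetry.io/collector/component"
)

// newTelemetryBuilder wraps the repeated setup: build the TelemetryBuilder
// and register its Shutdown as test cleanup.
func newTelemetryBuilder(t *testing.T, set component.TelemetrySettings) *metadata.TelemetryBuilder {
	t.Helper()
	tb, err := metadata.NewTelemetryBuilder(set)
	require.NoError(t, err)
	t.Cleanup(func() { tb.Shutdown() })
	return tb
}
```

Call sites would then shrink to `tb := newTelemetryBuilder(t, tt.NewTelemetrySettings())`.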
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Description
Expose some metrics for batch sizes.
Link to tracking issue
Fixes #12894