
Expose size metrics for batch sizes #13002


Open · wants to merge 1 commit into main

Conversation

@iblancasa (Contributor) commented May 8, 2025

Description

Expose some metrics for batch sizes.

Link to tracking issue

Fixes #12894

@iblancasa iblancasa requested review from bogdandrutu, dmitryax and a team as code owners May 8, 2025 15:05
@iblancasa iblancasa marked this pull request as draft May 8, 2025 15:05
github-actions bot commented:

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions bot added Stale and removed Stale labels May 23, 2025
@iblancasa iblancasa marked this pull request as ready for review May 27, 2025 10:20
@iblancasa iblancasa requested review from mx-psi and dmathieu as code owners May 27, 2025 10:20

codecov bot commented May 27, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.33%. Comparing base (c9aaed8) to head (3667e9f).

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #13002      +/-   ##
==========================================
+ Coverage   91.29%   91.33%   +0.04%     
==========================================
  Files         509      509              
  Lines       28735    28786      +51     
==========================================
+ Hits        26233    26293      +60     
+ Misses       1988     1982       -6     
+ Partials      514      511       -3     


@iblancasa iblancasa force-pushed the 12894 branch 2 times, most recently from 9541083 to d382a21 Compare May 29, 2025 16:43
@iblancasa (Contributor, Author) commented:

Ping @open-telemetry/collector-approvers

@jade-guiton-dd (Contributor) left a comment:

Thank you for tackling this. I think there are a few issues with the current implementation (and a bunch of nitpicks as well, but don't feel the need to address all of those).

Independently of the code itself, since the original issue only called for a way to measure batch sizes in bytes, and considering the relatively large size of this PR, maybe it would be worth splitting the latency/flush reason metrics into their own PRs for easier review?

@iblancasa (Contributor, Author) replied:

> Thank you for tackling this. I think there are a few issues with the current implementation (and a bunch of nitpicks as well, but don't feel the need to address all of those).

Thank you for your fast review! I will take care of all of them.

> Independently of the code itself, since the original issue only called for a way to measure batch sizes in bytes, and considering the relatively large size of this PR, maybe it would be worth splitting the latency/flush reason metrics into their own PRs for easier review?

Sure, will do.

Thanks again.

@iblancasa iblancasa changed the title Expose metrics for batch sizes Expose size metrics for batch sizes Jun 6, 2025
@jade-guiton-dd (Contributor) left a comment:

Thank you for the update! A few more comments:

	// Record the batch size only if the instrument either does not expose an
	// Enabled check or reports itself as enabled.
	if instr, ok := qb.telemetryBuilder.ExporterBatchSendSizeBytes.(interface{ Enabled(context.Context) bool }); !ok || instr.Enabled(ctx) {
		qb.telemetryBuilder.ExporterBatchSendSizeBytes.Record(ctx, int64(req.BytesSize()))
	}
}
@jade-guiton-dd (Contributor) commented on Jun 6, 2025:

I'm not sure shardBatcher.flush is the right place for this:

  • it's only used if batching is enabled;
  • it's only called when data exits the batcher.

I think that, if the aim is to help people size their queues, it would make more sense to measure the size of batches entering (or even just attempting to enter) the queue. Issue #12894 says "records the size in bytes of each batch added to the queue", so I think that was what Dan had in mind.

Given this, I think it would make more sense to put the instrumentation in obsQueue.Offer, especially since emitting metrics is the purpose of obsQueue and it already has a reference to a TelemetryBuilder.
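
For illustration, here is a minimal, self-contained sketch of what instrumenting the queue entry point could look like. This is not the collector's actual obsQueue code: the wrapper type, the meter name, and the instrument name otelcol_exporter_queue_batch_size_bytes are assumptions, and it uses the plain go.opentelemetry.io/otel metric API directly instead of the generated TelemetryBuilder.

package queuesketch

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
)

// sizedRequest is a stand-in for the exporter request type; only BytesSize is needed here.
type sizedRequest interface {
	BytesSize() int
}

// queue is a stand-in for the underlying queue being wrapped.
type queue[T sizedRequest] interface {
	Offer(ctx context.Context, req T) error
}

// obsQueueSketch wraps a queue and records the byte size of every batch offered to it,
// before downstream components (such as the batcher) can modify the request.
type obsQueueSketch[T sizedRequest] struct {
	next      queue[T]
	batchSize metric.Int64Histogram
}

func newObsQueueSketch[T sizedRequest](next queue[T]) (*obsQueueSketch[T], error) {
	hist, err := otel.Meter("exporterhelper-sketch").Int64Histogram(
		"otelcol_exporter_queue_batch_size_bytes", // illustrative name, not final
		metric.WithUnit("By"),
		metric.WithDescription("Size in bytes of each batch offered to the queue."),
	)
	if err != nil {
		return nil, err
	}
	return &obsQueueSketch[T]{next: next, batchSize: hist}, nil
}

func (o *obsQueueSketch[T]) Offer(ctx context.Context, req T) error {
	// Record on the way into the queue, before the request can be mutated downstream.
	o.batchSize.Record(ctx, int64(req.BytesSize()))
	return o.next.Offer(ctx, req)
}

Measuring in Offer captures the size the queue actually receives, which is the quantity issue #12894 asks about.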

@iblancasa (Contributor, Author) commented on Jun 6, 2025:

I'm not sure about that... maybe we are talking about two different kinds of metrics.

In our case, for instance, we want to measure the size of the batches that the batching mechanism in the exporter creates, not the size of the batch before it enters the queue.

From this comment, I understand doing it in obsQueue is not possible:

func (or *obsQueue[T]) Offer(ctx context.Context, req T) error {
	// Have to read the number of items before sending the request since the request can
	// be modified by the downstream components like the batcher.

Also, I want to expose other metrics, like counting how many times we send data because of flush_timeout, for instance.

@jade-guiton-dd (Contributor) replied:

> In our case, for instance, we want to measure the size of the batches that the batching mechanism in the exporter creates, not the size of the batch before it enters the queue.

Issue #12894 is explicitly concerned with helping users size their (especially persistent) queue. Perhaps both metrics are useful, but only the one measuring sizes before the queue would be a move towards resolving that issue.

@jade-guiton-dd (Contributor) added:

Perhaps this PR can offer an otelcol_exporter_queue_batch_size metric which measures items coming into the queue, and your other PR that tracks batcher metrics (such as reasons for flushing) can add a separate otelcol_exporter_batcher_batch_size metric measuring items coming into the batcher. Although we'll have to motivate the use case for the latter.
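
As a rough sketch of that naming split (the instrument names are the proposals from this thread, not final, and in the collector they would come from mdatagen-generated code rather than hand-written calls; this assumes the go.opentelemetry.io/otel/metric import from the sketch above):

// newBatchSizeInstruments declares the two proposed histograms with the plain OTel metric API.
func newBatchSizeInstruments(meter metric.Meter) (queueBatch, batcherBatch metric.Int64Histogram, err error) {
	// Items per batch as it is offered to the exporter queue.
	queueBatch, err = meter.Int64Histogram("otelcol_exporter_queue_batch_size",
		metric.WithUnit("{items}"))
	if err != nil {
		return nil, nil, err
	}
	// Items per batch entering the batcher (would belong to the separate batcher-metrics PR).
	batcherBatch, err = meter.Int64Histogram("otelcol_exporter_batcher_batch_size",
		metric.WithUnit("{items}"))
	if err != nil {
		return nil, nil, err
	}
	return queueBatch, batcherBatch, nil
}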

@iblancasa (Contributor, Author) replied:

Ok, will work on it.

Comment on lines 215 to 217
tb, err := metadata.NewTelemetryBuilder(tt.NewTelemetrySettings())
require.NoError(t, err)
t.Cleanup(func() { tb.Shutdown() })
@jade-guiton-dd (Contributor) commented on Jun 6, 2025:

Given how many times it's repeated, it might be worth adding a test helper for this.
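
A possible shape for such a helper (hypothetical; it reuses the metadata and require packages from the snippet above plus the standard testing package, and takes a component.TelemetrySettings so callers can pass tt.NewTelemetrySettings()):

// newTestTelemetryBuilder builds a TelemetryBuilder for a test, fails the test on error,
// and shuts the builder down automatically when the test finishes.
func newTestTelemetryBuilder(t *testing.T, set component.TelemetrySettings) *metadata.TelemetryBuilder {
	t.Helper()
	tb, err := metadata.NewTelemetryBuilder(set)
	require.NoError(t, err)
	t.Cleanup(func() { tb.Shutdown() })
	return tb
}

A test would then reduce the three repeated lines to:

	tb := newTestTelemetryBuilder(t, tt.NewTelemetrySettings())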

Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Successfully merging this pull request may close these issues.

[exporterhelper] Metrics for storage allocation
2 participants