
feat: allow users to use different channels based on runtime in batch span processor #560

Merged
merged 4 commits into open-telemetry:main on Jun 5, 2021

Conversation

TommyCpp
Contributor

@TommyCpp TommyCpp commented May 26, 2021

Resolve #520.
Close #533

Another proposed solution to remove mutex in batch span processors.

I tried to keep the type parameter within the channel types, but I couldn't find a way to do so without introducing a type parameter on `Runtime`. So currently all `Sender`s and `Receiver`s from `Runtime` have `BatchMessage` as their item type. Let me know if there is a better way to handle this.

TODO:

  • Return a better error message.
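The design described above can be sketched in plain Rust. This is a minimal, hypothetical illustration, not the crate's actual API: the names `TraceRuntimeSketch` and `StdRuntime` are invented for this sketch, and std's `mpsc` stands in for the tokio/async-std channels a real runtime would provide. The point is that the channel constructor fixes the item type to `BatchMessage`, so no item-type parameter leaks into the general `Runtime` trait.

```rust
use std::sync::mpsc;

// Hypothetical stand-in for the real `BatchMessage` enum in the span processor.
#[derive(Debug, PartialEq)]
enum BatchMessage {
    ExportSpan(&'static str),
    Shutdown,
}

// Sketch of the idea: a trace-specific runtime trait whose channel items
// are fixed to `BatchMessage`, keeping the general runtime trait free of
// an item-type parameter.
trait TraceRuntimeSketch {
    type Sender;
    type Receiver;
    fn batch_message_channel(&self, capacity: usize) -> (Self::Sender, Self::Receiver);
}

// A runtime backed by std's bounded channel; a real implementation would
// wire this to the async runtime's own channel types instead.
struct StdRuntime;

impl TraceRuntimeSketch for StdRuntime {
    type Sender = mpsc::SyncSender<BatchMessage>;
    type Receiver = mpsc::Receiver<BatchMessage>;

    fn batch_message_channel(&self, capacity: usize) -> (Self::Sender, Self::Receiver) {
        mpsc::sync_channel(capacity)
    }
}

fn main() {
    let runtime = StdRuntime;
    let (tx, rx) = runtime.batch_message_channel(8);
    tx.send(BatchMessage::ExportSpan("span-1")).unwrap();
    tx.send(BatchMessage::Shutdown).unwrap();
    assert_eq!(rx.recv().unwrap(), BatchMessage::ExportSpan("span-1"));
    assert_eq!(rx.recv().unwrap(), BatchMessage::Shutdown);
}
```

Because the processor only ever sends `BatchMessage`, the associated `Sender`/`Receiver` types can differ per runtime without any generics escaping into user-facing signatures.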

@codecov

codecov bot commented May 26, 2021

Codecov Report

Merging #560 (e5d1717) into main (fb576b0) will increase coverage by 0.0%.
The diff coverage is 61.5%.


@@          Coverage Diff          @@
##            main    #560   +/-   ##
=====================================
  Coverage   54.5%   54.6%           
=====================================
  Files         97      98    +1     
  Lines       8534    8550   +16     
=====================================
+ Hits        4659    4669   +10     
- Misses      3875    3881    +6     
Impacted Files Coverage Δ
opentelemetry-datadog/src/exporter/mod.rs 16.4% <ø> (ø)
opentelemetry-jaeger/src/exporter/mod.rs 47.8% <ø> (ø)
opentelemetry-zipkin/src/exporter/mod.rs 0.0% <ø> (ø)
opentelemetry/src/global/trace.rs 15.6% <0.0%> (ø)
opentelemetry/src/sdk/trace/provider.rs 74.7% <0.0%> (ø)
opentelemetry/src/sdk/trace/runtime.rs 56.0% <56.0%> (ø)
opentelemetry/src/sdk/trace/span_processor.rs 81.7% <90.9%> (+0.3%) ⬆️
opentelemetry/src/trace/mod.rs 50.0% <0.0%> (+6.0%) ⬆️

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fb576b0...e5d1717.

Member

@jtescher jtescher left a comment


LGTM

* Add error messages based on error type.
Note that the types used in the `batch_message_channel` function and the `TrySend` trait all come from the `trace` mod, but the runtime doesn't live in the `trace` mod. So we use another trait, `TraceRuntime`, to include all functions and types related to the `trace` mod.
@TommyCpp
Contributor Author

TommyCpp commented Jun 1, 2021

@jtescher I realized that the implementation used types from the `trace` mod while the runtime is not part of `trace`. So I separated the channel part from the runtime into `sdk/trace/runtime`.
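For the "error messages based on error type" commit mentioned above, a `TrySend`-style trait could distinguish a full channel from a closed one, so the processor can report a precise error instead of a generic send failure. The sketch below is an illustrative assumption, not the crate's actual API: the error variants, messages, and the std `mpsc`-backed impl are all invented for this example.

```rust
use std::fmt;
use std::sync::mpsc;

// Hypothetical error type splitting "full" from "closed", mirroring the
// idea of per-cause error messages.
#[derive(Debug, PartialEq)]
enum TrySendError {
    ChannelFull,
    ChannelClosed,
}

impl fmt::Display for TrySendError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TrySendError::ChannelFull => {
                write!(f, "cannot send message to batch processor: the channel is full")
            }
            TrySendError::ChannelClosed => {
                write!(f, "cannot send message to batch processor: the channel is closed")
            }
        }
    }
}

// A sketch of a runtime-agnostic `TrySend` abstraction over sender halves.
trait TrySend {
    type Message;
    fn try_send(&self, item: Self::Message) -> Result<(), TrySendError>;
}

// Example impl for std's bounded sender, mapping each std error variant
// to the matching high-level cause.
impl<T> TrySend for mpsc::SyncSender<T> {
    type Message = T;
    fn try_send(&self, item: T) -> Result<(), TrySendError> {
        mpsc::SyncSender::try_send(self, item).map_err(|err| match err {
            mpsc::TrySendError::Full(_) => TrySendError::ChannelFull,
            mpsc::TrySendError::Disconnected(_) => TrySendError::ChannelClosed,
        })
    }
}

fn main() {
    // Buffer of 1: the second send fails with a "full" error.
    let (tx, _rx) = mpsc::sync_channel::<i32>(1);
    assert!(TrySend::try_send(&tx, 1).is_ok());
    assert_eq!(TrySend::try_send(&tx, 2), Err(TrySendError::ChannelFull));

    // Dropped receiver: sends fail with a "closed" error.
    let (tx2, rx2) = mpsc::sync_channel::<i32>(1);
    drop(rx2);
    assert_eq!(TrySend::try_send(&tx2, 3), Err(TrySendError::ChannelClosed));
}
```

Keeping the mapping in each runtime's sender impl means callers see one uniform error type regardless of which channel backs the processor.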

@jtescher
Member

jtescher commented Jun 2, 2021

@TommyCpp seems good 👍

@TommyCpp TommyCpp marked this pull request as ready for review June 5, 2021 00:59
@TommyCpp TommyCpp requested a review from a team as a code owner June 5, 2021 00:59
@TommyCpp
Contributor Author

TommyCpp commented Jun 5, 2021

I tested a POC of a custom channel built on top of the unbounded channel, and it didn't seem to outperform the runtime-provided version. I'd suggest we merge this PR, as we have been investigating this issue for a while.

Member

@frigus02 frigus02 left a comment


Agree. This looks really good. Nice work 👍

@jtescher
Member

jtescher commented Jun 5, 2021

Great work @TommyCpp

@jtescher jtescher merged commit 1ca62d3 into open-telemetry:main Jun 5, 2021
Development

Successfully merging this pull request may close these issues.

Batch span processor on_end mutex
3 participants