Memory leak caused by the long-lifespan trace_debug_span within the reused connection #763

Closed
ikhin opened this issue Apr 11, 2024 · 1 comment · May be fixed by #764

Comments

@ikhin

ikhin commented Apr 11, 2024

We reuse the HTTP2 Connection created by hyperium/h2.

However, during stress testing a memory leak occurred, eventually causing the system to crash and reboot, so I began troubleshooting.

[Screenshot 2024-04-11 20:23:20]

1. Troubleshooting, starting from the memory growth chart

First, using jemalloc and jeprof, I generated memory growth charts at intervals: jeprof_memory_leak.pdf

The largest memory block was finish_grow, which alone accounted for 88.8% of the total allocation. finish_grow is part of Vec's reallocation path when it grows. This suggests the leak happened while writing into Vecs: memory was allocated for them but not released in a timely manner.
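
For intuition, here is a minimal sketch of that allocation pattern using nothing but a plain Vec (not h2 or tracing code): pushing into a Vec that is never drained repeatedly goes through reserve_for_push/finish_grow, and the memory is only released when the Vec itself is dropped.

fn main() {
    let mut events: Vec<String> = Vec::new();
    for i in 0..1_000_000 {
        // Each push may reallocate; nothing is ever removed, so the
        // allocation keeps growing for as long as `events` is alive.
        events.push(format!("event {i}"));
    }
    println!("still holding {} events", events.len());
}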

The call chain upstream of finish_grow was tracing_core -> event -> Event -> dispatch, which points at the subscriber. In our business code, the subscriber configured for tracing_core is OpenTelemetry:

// `.with()` and `.try_init()` come from tracing_subscriber's extension traits.
use tracing_subscriber::prelude::*;

let tracer = opentelemetry_jaeger::new_collector_pipeline()
    .with_endpoint(url)
    .with_service_name(server_name)
    .with_hyper()
    .install_batch(opentelemetry::runtime::Tokio)?;
tracing_subscriber::registry()
    .with(tracing_opentelemetry::layer().with_tracer(tracer))
    .try_init()?;

In opentelemetry_api, I found the call chain finish_grow <- reserve_for_push <- push at the on_event function, which pushes events onto the events array of a span (tracing_opentelemetry/layer.rs):

fn on_event(&self, event: &Event<'_>, ctx: Context<'_, S>) {
    if let Some(span) = ctx.lookup_current() {
        // ...
        if let Some(ref mut events) = builder.events {
            events.push(otel_event);
        } else {
            builder.events = Some(vec![otel_event]);
        }
        // ...
    }
}
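
The leak pattern can be reproduced in isolation. The sketch below is hypothetical and independent of h2: a debug-level span that lives for the whole program, with one event recorded per simulated request. With an OpenTelemetry layer installed, each event is pushed into the span's builder and can only be exported and freed once the span closes.

use tracing::{debug, debug_span};

fn main() {
    // Stands in for h2's connection-lifetime trace_debug_span.
    let connection_span = debug_span!("Connection");
    let _guard = connection_span.enter();

    for i in 0..1_000_000u64 {
        // Stands in for the per-frame events emitted while the connection
        // handles requests; they accumulate inside the still-open span.
        debug!(request = i, "handled request");
    }
    // Only here, when the span finally ends, can the buffered events be
    // exported and their memory released.
}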

2. Troubleshooting, starting from Jaeger on the other side

On the other hand, starting from Jaeger, I observed a span whose duration was as long as the stress test itself. This span is the trace_debug_span created when the Connection is established (h2/src/proto/connection.rs).
There are many events under this trace_debug_span, as if an event is added for every request that comes in (h2/src/codec/framed_read.rs, h2/src/codec/framed_write.rs).
To disable the trace_debug_span in the h2 library, you can change the RUST_LOG logging level from debug to info. After doing this and running the stress test again, the memory no longer leaks.
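
The same effect can be set up in code instead of via the RUST_LOG environment variable. The sketch below is an assumption about how one might wire it with tracing_subscriber's EnvFilter (requires the env-filter feature); the same filter could equally be placed in front of the OpenTelemetry layer shown earlier. The directive keeps the application at debug while holding h2 at info, so h2's debug-level connection span is never enabled.

use tracing_subscriber::{prelude::*, EnvFilter};

fn init_logging() -> Result<(), Box<dyn std::error::Error>> {
    // "debug,h2=info": debug everywhere except the h2 crate, which stays at info.
    let filter = EnvFilter::new("debug,h2=info");
    tracing_subscriber::registry()
        .with(filter)
        .with(tracing_subscriber::fmt::layer())
        .try_init()?;
    Ok(())
}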

In summary

I used the h2 library to create a persistent Connection for reuse, which results in a trace_debug_span whose lifespan matches the connection's. When a stress test sends continuous requests, events are constantly appended to the events array inside the trace_debug_span. As long as the persistent Connection is not closed, the trace_debug_span never ends, so it is never reported and destroyed. The memory held by the trace_debug_span therefore keeps growing, producing a memory leak.

Although we normally don't enable RUST_LOG: "debug" in production, sometimes it is necessary to turn it on to troubleshoot business issues from the debug logs, and this memory leak has made that troublesome.
Therefore, could the trace_debug_span created with each connection be made an optional feature?
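
As a sketch of what that opt-out could look like (hypothetical, not actual h2 code; the feature name connection-trace-span is invented for illustration):

fn connection_span() -> tracing::Span {
    // With the hypothetical feature enabled, build the connection-lifetime
    // debug span; otherwise return a disabled span that records nothing.
    #[cfg(feature = "connection-trace-span")]
    {
        tracing::debug_span!("Connection")
    }
    #[cfg(not(feature = "connection-trace-span"))]
    {
        tracing::Span::none()
    }
}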

@seanmonstar
Member

I don't think this is h2's fault; it's rather the subscriber that is growing the events forever. Any subscriber implementation will eventually need to cap its own growth.
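
For illustration, a minimal sketch of what capping that growth could mean on the subscriber side (hypothetical, not the actual tracing_opentelemetry code; the per-span limit is invented):

// A bounded event buffer: events beyond a hypothetical per-span limit are
// dropped instead of growing the Vec forever.
struct BoundedEvents<T> {
    max: usize,
    events: Vec<T>,
}

impl<T> BoundedEvents<T> {
    fn new(max: usize) -> Self {
        Self { max, events: Vec::new() }
    }

    fn push(&mut self, event: T) {
        if self.events.len() < self.max {
            self.events.push(event);
        }
    }
}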

@seanmonstar closed this as not planned on Apr 12, 2024