Struct opentelemetry_sdk::trace::BatchSpanProcessor
pub struct BatchSpanProcessor<R: RuntimeChannel> { /* private fields */ }
A SpanProcessor that asynchronously buffers finished spans and reports them at a preconfigured interval.
Batch span processors need to run a background task to collect and send spans. Different runtimes need different ways to handle the background task.
Note: Configuring an opentelemetry Runtime that's not compatible with the underlying runtime can cause deadlocks (see the Tokio section below).
§Use with Tokio
Tokio currently offers two different schedulers. One is current_thread_scheduler, the other is multiple_thread_scheduler. Both of them default to using batch span processors to install span exporters.

Tokio's current_thread_scheduler can cause the program to hang forever if blocking work is scheduled alongside other tasks in the same runtime. To avoid this, be sure to enable the rt-tokio-current-thread feature in this crate if you are using that runtime (e.g. users of actix-web); blocking tasks will then be scheduled on a different thread.
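As an illustration, here is a minimal sketch of wiring the processor to a current-thread Tokio runtime, assuming the rt-tokio-current-thread feature is enabled and using the crate's NoopSpanExporter as a stand-in exporter:

use opentelemetry::global;
use opentelemetry_sdk::{runtime, testing::trace::NoopSpanExporter, trace};
// Requires the `rt-tokio-current-thread` feature of this crate.
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let exporter = NoopSpanExporter::new();
    // `runtime::TokioCurrentThread` runs the batching task on a dedicated
    // background thread, so it cannot deadlock the current-thread scheduler.
    let batch = trace::BatchSpanProcessor::builder(exporter, runtime::TokioCurrentThread)
        .build();
    let provider = trace::TracerProvider::builder()
        .with_span_processor(batch)
        .build();
    let _ = global::set_tracer_provider(provider);
}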
§Examples
This processor can be configured with an executor of your choice to batch and upload spans asynchronously when they end. If you have added a library like tokio or async-std, you can pass in their respective spawn and interval functions to have batching performed in those contexts.
use opentelemetry::global;
use opentelemetry_sdk::{runtime, testing::trace::NoopSpanExporter, trace};
use opentelemetry_sdk::trace::BatchConfigBuilder;
use std::time::Duration;
#[tokio::main]
async fn main() {
    // Configure your preferred exporter
    let exporter = NoopSpanExporter::new();

    // Create a batch span processor using an exporter and a runtime
    let batch = trace::BatchSpanProcessor::builder(exporter, runtime::Tokio)
        .with_batch_config(BatchConfigBuilder::default().with_max_queue_size(4096).build())
        .build();

    // Then register the batch processor with a tracer provider so spans are exported in batches.
    let provider = trace::TracerProvider::builder()
        .with_span_processor(batch)
        .build();

    let _ = global::set_tracer_provider(provider);
}
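Other batching knobs can be tuned through BatchConfigBuilder in the same way. A minimal sketch follows; the option values are illustrative, and the exact set of builder methods may vary between crate versions:

use std::time::Duration;
use opentelemetry_sdk::trace::BatchConfigBuilder;

fn main() {
    // Illustrative limits; choose values that match your span volume.
    let _config = BatchConfigBuilder::default()
        .with_max_queue_size(4096)                    // buffered spans before new ones are dropped
        .with_max_export_batch_size(512)              // spans sent in a single export call
        .with_scheduled_delay(Duration::from_secs(5)) // interval between scheduled exports
        .build();
}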
Implementations§
impl<R: RuntimeChannel> BatchSpanProcessor<R>

pub fn builder<E>(exporter: E, runtime: R) -> BatchSpanProcessorBuilder<E, R>
where
    E: SpanExporter,

Create a new batch processor builder.
Trait Implementations§
impl<R: RuntimeChannel> Debug for BatchSpanProcessor<R>

impl<R: RuntimeChannel> SpanProcessor for BatchSpanProcessor<R>

fn on_start(&self, _span: &mut Span, _cx: &Context)

on_start is called when a Span is started. This method is called synchronously on the thread that started the span, therefore it should not block or throw exceptions.

fn on_end(&self, span: SpanData)

on_end is called after a Span is ended (i.e., the end timestamp is already set). This method is called synchronously within the Span::end API, therefore it should not block or throw an exception.