pub struct StreamConsumer<C = DefaultConsumerContext, R = DefaultRuntime>
where C: ConsumerContext + 'static,
{ /* private fields */ }

A high-level consumer with a Stream interface.

This consumer doesn’t need to be polled explicitly. Extracting an item from the stream returned by the stream method will implicitly poll the underlying Kafka consumer.

If you activate the consumer group protocol by calling subscribe, the stream consumer will integrate with librdkafka’s liveness detection as described in KIP-62. Be sure to attempt to extract a message from the stream consumer at least once every max.poll.interval.ms milliseconds, or librdkafka will assume that the processing thread is wedged and leave the consumer group.
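
For example, a minimal end-to-end sketch (the broker address, group id, and topic name below are placeholders, and a Tokio runtime is assumed):

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::Message;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092") // placeholder broker
        .set("group.id", "example-group")           // placeholder group id
        .create()?;

    // Activates the consumer group protocol and, with it, KIP-62 liveness detection.
    consumer.subscribe(&["example-topic"])?;

    loop {
        // Awaiting recv keeps the consumer polled within max.poll.interval.ms.
        let message = consumer.recv().await?;
        println!(
            "{}:{}@{}: {:?}",
            message.topic(),
            message.partition(),
            message.offset(),
            message.payload().map(String::from_utf8_lossy),
        );
    }
}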

Implementations

impl<C, R> StreamConsumer<C, R>
where C: ConsumerContext + 'static,

pub fn stream(&self) -> MessageStream<'_>

Constructs a stream that yields messages from this consumer.

It is legal to have multiple live message streams for the same consumer, and to move those message streams across threads. Note, however, that the message streams share the same underlying state. A message received by the consumer will be delivered to only one of the live message streams. If you seek the underlying consumer, all message streams created from the consumer will begin to draw messages from the new position of the consumer.

If you want multiple independent views of a Kafka topic, create multiple consumers, not multiple message streams.
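
As a sketch (assuming consumer is an already-subscribed StreamConsumer running on a Tokio runtime):

use futures::stream::StreamExt;
use rdkafka::Message;

let mut stream = consumer.stream();
while let Some(message) = stream.next().await {
    match message {
        Ok(m) => println!("received message at offset {}", m.offset()),
        Err(e) => eprintln!("kafka error: {}", e),
    }
}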

pub async fn recv(&self) -> Result<BorrowedMessage<'_>, KafkaError>

Receives the next message from the stream.

This method will block until the next message is available or an error occurs. It is legal to call recv from multiple threads simultaneously.

This method is cancellation safe.

Note that this method is exactly as efficient as constructing a single-use message stream and extracting one message from it:

use futures::stream::StreamExt;

consumer.stream().next().await.expect("MessageStream never returns None");
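
Because recv is cancellation safe, it also composes with tokio::select!. A sketch, assuming a Tokio runtime and a hypothetical shutdown channel (neither is part of this API):

// `shutdown` is a hypothetical tokio::sync::oneshot::Receiver<()>.
tokio::select! {
    result = consumer.recv() => match result {
        Ok(_message) => { /* process the message */ }
        Err(e) => eprintln!("kafka error: {}", e),
    },
    _ = &mut shutdown => { /* stop consuming */ }
}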

pub fn split_partition_queue(self: &Arc<Self>, topic: &str, partition: i32) -> Option<StreamPartitionQueue<C, R>>

Splits messages for the specified partition into their own stream.

If the topic or partition is invalid, returns None.

After calling this method, newly-fetched messages for the specified partition will be returned via StreamPartitionQueue::recv rather than StreamConsumer::recv. Note that there may be buffered messages for the specified partition that will continue to be returned by StreamConsumer::recv. For best results, call split_partition_queue before the first call to StreamConsumer::recv.

You must periodically await StreamConsumer::recv, even if no messages are expected, to serve callbacks. Consider using a background task like:

tokio::spawn(async move {
    let message = stream_consumer.recv().await;
    panic!("main stream consumer queue unexpectedly received message: {:?}", message);
});

Note that calling Consumer::assign will deactivate any existing partition queues. You will need to call this method for every partition that should be split after every call to assign.

Beware that this method is implemented for &Arc<Self>, not &self. You will need to wrap your consumer in an Arc in order to call this method. This design permits moving the partition queue to another thread while ensuring the partition queue does not outlive the consumer.
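
A sketch of the intended pattern (the topic name and partition are placeholders, and consumer is assumed to be a configured, subscribed StreamConsumer on a Tokio runtime):

use std::sync::Arc;
use rdkafka::Message;

let consumer = Arc::new(consumer);

// Split partition 0 of "example-topic" into its own stream.
let partition_queue = consumer
    .split_partition_queue("example-topic", 0)
    .expect("topic or partition is invalid");

// Keep driving the main consumer so that callbacks and rebalances are served.
let main_consumer = Arc::clone(&consumer);
tokio::spawn(async move {
    let message = main_consumer.recv().await;
    panic!("main stream consumer queue unexpectedly received message: {:?}", message);
});

// Messages for the split partition now arrive only on the partition queue.
match partition_queue.recv().await {
    Ok(m) => println!("partition 0 message at offset {}", m.offset()),
    Err(e) => eprintln!("kafka error: {}", e),
}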

Trait Implementations

impl<C, R> Consumer<C> for StreamConsumer<C, R>
where C: ConsumerContext,

fn client(&self) -> &Client<C>

Returns the Client underlying this consumer.

fn group_metadata(&self) -> Option<ConsumerGroupMetadata>

Returns the current consumer group metadata associated with the consumer.

fn subscribe(&self, topics: &[&str]) -> KafkaResult<()>

Subscribes the consumer to a list of topics.

fn unsubscribe(&self)

Unsubscribes the current subscription list.

fn assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>

Manually assigns topics and partitions to the consumer. If used, automatic consumer rebalance won’t be activated.
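
For example, a sketch of a manual assignment (the topic name is a placeholder):

use rdkafka::topic_partition_list::TopicPartitionList;
use rdkafka::Offset;

// Consume partition 0 of "example-topic" from the beginning, bypassing the
// consumer group subscription and automatic rebalancing.
let mut tpl = TopicPartitionList::new();
tpl.add_partition_offset("example-topic", 0, Offset::Beginning)?;
consumer.assign(&tpl)?;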

fn seek<T: Into<Timeout>>(&self, topic: &str, partition: i32, offset: Offset, timeout: T) -> KafkaResult<()>

Seeks to offset for the specified topic and partition. After a successful call to seek, the next poll of the consumer will return the message with offset.
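
For example, a sketch of rewinding a single partition (topic name and offset are placeholders):

use std::time::Duration;
use rdkafka::Offset;

// After this call, the next message polled for partition 0 will be offset 42.
consumer.seek("example-topic", 0, Offset::Offset(42), Duration::from_secs(5))?;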

fn commit(&self, topic_partition_list: &TopicPartitionList, mode: CommitMode) -> KafkaResult<()>

Commits the offset of the specified message. The commit can be sync (blocking), or async. Notice that when a specific offset is committed, all the previous offsets are considered committed as well. Use this method only if you are processing messages in order.

fn commit_consumer_state(&self, mode: CommitMode) -> KafkaResult<()>

Commits the current consumer state. Notice that if the consumer fails after a message has been received, but before the message has been processed by the user code, this might lead to data loss. Check the “at-least-once delivery” section in the readme for more information.

fn commit_message(&self, message: &BorrowedMessage<'_>, mode: CommitMode) -> KafkaResult<()>

Commit the provided message. Note that this will also automatically commit every message with lower offset within the same partition.
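
For example, a sketch of committing after processing (consumer is assumed to be a subscribed StreamConsumer):

use rdkafka::consumer::CommitMode;

let message = consumer.recv().await?;
// ... process the message ...
// Commits this offset (and every earlier offset in the same partition) without blocking.
consumer.commit_message(&message, CommitMode::Async)?;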

fn store_offset(&self, topic: &str, partition: i32, offset: i64) -> KafkaResult<()>

Stores offset to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.

fn store_offset_from_message(&self, message: &BorrowedMessage<'_>) -> KafkaResult<()>

Like Consumer::store_offset, but the offset to store is derived from the provided message.
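
For example, a sketch of an at-least-once pattern, assuming auto-commit is enabled and enable.auto.offset.store is set to false in the configuration:

let message = consumer.recv().await?;
// ... process the message ...
// Mark the message as processed; the stored offset is picked up by the next auto-commit.
consumer.store_offset_from_message(&message)?;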

fn store_offsets(&self, tpl: &TopicPartitionList) -> KafkaResult<()>

Stores offsets to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.

fn subscription(&self) -> KafkaResult<TopicPartitionList>

Returns the current topic subscription.

fn assignment(&self) -> KafkaResult<TopicPartitionList>

Returns the current partition assignment.

fn committed<T>(&self, timeout: T) -> KafkaResult<TopicPartitionList>
where T: Into<Timeout>, Self: Sized,

Retrieves the committed offsets for topics and partitions.

fn committed_offsets<T>(&self, tpl: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
where T: Into<Timeout>,

Retrieves the committed offsets for specified topics and partitions.

fn offsets_for_timestamp<T>(&self, timestamp: i64, timeout: T) -> KafkaResult<TopicPartitionList>
where T: Into<Timeout>, Self: Sized,

Looks up the offsets for this consumer’s partitions by timestamp.

fn offsets_for_times<T>(&self, timestamps: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
where T: Into<Timeout>, Self: Sized,

Looks up the offsets for the specified partitions by timestamp.

fn position(&self) -> KafkaResult<TopicPartitionList>

Retrieves the current positions (offsets) for topics and partitions.

fn fetch_metadata<T>(&self, topic: Option<&str>, timeout: T) -> KafkaResult<Metadata>
where T: Into<Timeout>, Self: Sized,

Returns the metadata information for the specified topic, or for all topics in the cluster if no topic is specified.

fn fetch_watermarks<T>(&self, topic: &str, partition: i32, timeout: T) -> KafkaResult<(i64, i64)>
where T: Into<Timeout>, Self: Sized,

Returns the low and high watermarks for a specific topic and partition.
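
For example, a sketch of querying the watermarks for a single partition (topic name is a placeholder):

use std::time::Duration;

// The low watermark is the earliest available offset; the high watermark is the
// offset that the next produced message will receive.
let (low, high) = consumer.fetch_watermarks("example-topic", 0, Duration::from_secs(5))?;
println!("partition 0 currently retains roughly {} messages", high - low);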

fn fetch_group_list<T>(&self, group: Option<&str>, timeout: T) -> KafkaResult<GroupList>
where T: Into<Timeout>, Self: Sized,

Returns the group membership information for the given group. If no group is specified, all groups will be returned.

fn pause(&self, partitions: &TopicPartitionList) -> KafkaResult<()>

Pauses consumption for the provided list of partitions.

fn resume(&self, partitions: &TopicPartitionList) -> KafkaResult<()>

Resumes consumption for the provided list of partitions.
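
For example, a sketch of pausing and later resuming everything currently assigned (pausing only stops local fetching; it does not trigger a rebalance):

let assignment = consumer.assignment()?;
consumer.pause(&assignment)?;
// ... apply backpressure, drain in-flight work, etc. ...
consumer.resume(&assignment)?;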

fn rebalance_protocol(&self) -> RebalanceProtocol

Reports the rebalance protocol in use.

fn context(&self) -> &Arc<C>

Returns a reference to the ConsumerContext used to create this consumer.

impl<R> FromClientConfig for StreamConsumer<DefaultConsumerContext, R>
where R: AsyncRuntime,

fn from_config(config: &ClientConfig) -> KafkaResult<Self>

Creates a client from a client configuration. The default client context will be used.

impl<C, R> FromClientConfigAndContext<C> for StreamConsumer<C, R>
where C: ConsumerContext + 'static, R: AsyncRuntime,

Creates a new StreamConsumer starting from a ClientConfig.

fn from_config_and_context(config: &ClientConfig, context: C) -> KafkaResult<Self>

Creates a client from a client configuration and a client context.
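
For example, a sketch of supplying a custom context (both traits provide default implementations, so an empty impl is enough; the broker address and group id are placeholders):

use rdkafka::config::ClientConfig;
use rdkafka::client::ClientContext;
use rdkafka::consumer::{ConsumerContext, StreamConsumer};

// A do-nothing context; override individual ClientContext/ConsumerContext
// hooks (logging, statistics, rebalance callbacks, ...) as needed.
struct CustomContext;
impl ClientContext for CustomContext {}
impl ConsumerContext for CustomContext {}

let consumer: StreamConsumer<CustomContext> = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092") // placeholder broker
    .set("group.id", "example-group")           // placeholder group id
    .create_with_context(CustomContext)?;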

Auto Trait Implementations

impl<C = DefaultConsumerContext, R = TokioRuntime> !RefUnwindSafe for StreamConsumer<C, R>

impl<C, R> Send for StreamConsumer<C, R>
where R: Send,

impl<C, R> Sync for StreamConsumer<C, R>
where R: Sync,

impl<C, R> Unpin for StreamConsumer<C, R>
where R: Unpin,

impl<C = DefaultConsumerContext, R = TokioRuntime> !UnwindSafe for StreamConsumer<C, R>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.