Struct rdkafka::consumer::StreamConsumer
pub struct StreamConsumer<C = DefaultConsumerContext, R = DefaultRuntime>
where
    C: ConsumerContext + 'static,
{ /* private fields */ }
A high-level consumer with a Stream interface.

This consumer doesn’t need to be polled explicitly. Extracting an item from
the stream returned by the stream method will implicitly poll the
underlying Kafka consumer.

If you activate the consumer group protocol by calling subscribe, the
stream consumer will integrate with librdkafka’s liveness detection as
described in KIP-62. You must be sure that you attempt to extract a message
from the stream consumer at least every max.poll.interval.ms milliseconds,
or librdkafka will assume that the processing thread is wedged and leave
the consumer group.
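For illustration, a minimal end-to-end sketch might look like the following
(broker address, group id, and topic name are placeholders):

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::error::KafkaResult;
use rdkafka::Message;

// Placeholder broker, group id, and topic; adjust for your cluster.
async fn run_consumer() -> KafkaResult<()> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "example-group")
        .create()?;

    consumer.subscribe(&["example-topic"])?;

    loop {
        match consumer.recv().await {
            Ok(m) => println!("received message at offset {}", m.offset()),
            Err(e) => eprintln!("kafka error: {}", e),
        }
    }
}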
Implementations
impl<C, R> StreamConsumer<C, R>
where
    C: ConsumerContext + 'static,
pub fn stream(&self) -> MessageStream<'_>
Constructs a stream that yields messages from this consumer.
It is legal to have multiple live message streams for the same consumer, and to move those message streams across threads. Note, however, that the message streams share the same underlying state. A message received by the consumer will be delivered to only one of the live message streams. If you seek the underlying consumer, all message streams created from the consumer will begin to draw messages from the new position of the consumer.
If you want multiple independent views of a Kafka topic, create multiple consumers, not multiple message streams.
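As an illustrative sketch, draining a stream created by this method might
look like:

use futures::stream::StreamExt;

let mut stream = consumer.stream();
while let Some(message) = stream.next().await {
    match message {
        Ok(_m) => { /* process the borrowed message */ }
        Err(e) => eprintln!("kafka error: {}", e),
    }
}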
pub async fn recv(&self) -> Result<BorrowedMessage<'_>, KafkaError>
Receives the next message from the stream.
This method will block until the next message is available or an error
occurs. It is legal to call recv from multiple threads simultaneously.
This method is cancellation safe.
Note that this method is exactly as efficient as constructing a single-use message stream and extracting one message from it:
use futures::stream::StreamExt;
consumer.stream().next().await.expect("MessageStream never returns None");
pub fn split_partition_queue(
    self: &Arc<Self>,
    topic: &str,
    partition: i32,
) -> Option<StreamPartitionQueue<C, R>>
Splits messages for the specified partition into their own stream.
If the topic or partition is invalid, returns None.
After calling this method, newly-fetched messages for the specified
partition will be returned via StreamPartitionQueue::recv rather than
StreamConsumer::recv. Note that there may be buffered messages for the
specified partition that will continue to be returned by
StreamConsumer::recv. For best results, call split_partition_queue before
the first call to StreamConsumer::recv.
You must periodically await StreamConsumer::recv, even if no messages are
expected, to serve callbacks. Consider using a background task like:

tokio::spawn(async move {
    let message = stream_consumer.recv().await;
    panic!("main stream consumer queue unexpectedly received message: {:?}", message);
});
Note that calling Consumer::assign will deactivate any existing partition
queues. You will need to call this method for every partition that should
be split after every call to assign.

Beware that this method is implemented for &Arc<Self>, not &self. You will
need to wrap your consumer in an Arc in order to call this method. This
design permits moving the partition queue to another thread while ensuring
the partition queue does not outlive the consumer.
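Putting this together, a sketch of splitting a single partition into its
own task might look like the following (topic name and partition are
placeholders):

use std::sync::Arc;

let consumer: Arc<StreamConsumer> = Arc::new(consumer);

// Route partition 0 of a placeholder topic to a dedicated queue.
let partition_queue = consumer
    .split_partition_queue("example-topic", 0)
    .expect("invalid topic or partition");

tokio::spawn(async move {
    loop {
        match partition_queue.recv().await {
            Ok(_message) => { /* handle messages for this partition only */ }
            Err(e) => eprintln!("kafka error: {}", e),
        }
    }
});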
Trait Implementations
impl<C, R> Consumer<C> for StreamConsumer<C, R>
where
    C: ConsumerContext,
fn group_metadata(&self) -> Option<ConsumerGroupMetadata>
fn subscribe(&self, topics: &[&str]) -> KafkaResult<()>
fn unsubscribe(&self)
fn assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
fn seek<T: Into<Timeout>>(&self, topic: &str, partition: i32, offset: Offset, timeout: T) -> KafkaResult<()>
Seeks to offset for the specified topic and partition. After a successful call to seek, the next poll of the consumer will return the message with offset.
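For instance, rewinding a partition might look like this sketch (topic
name, partition, and timeout are placeholders):

use rdkafka::Offset;
use std::time::Duration;

// Rewind a placeholder topic/partition to the earliest available offset.
consumer.seek("example-topic", 0, Offset::Beginning, Duration::from_secs(5))?;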
fn commit(&self, topic_partition_list: &TopicPartitionList, mode: CommitMode) -> KafkaResult<()>
fn commit_consumer_state(&self, mode: CommitMode) -> KafkaResult<()>
fn commit_message(&self, message: &BorrowedMessage<'_>, mode: CommitMode) -> KafkaResult<()>
fn store_offset(&self, topic: &str, partition: i32, offset: i64) -> KafkaResult<()>
enable.auto.offset.store should be set to false in the config.
fn store_offset_from_message(&self, message: &BorrowedMessage<'_>) -> KafkaResult<()>
Like Consumer::store_offset, but the offset to store is derived from the provided message.
fn store_offsets(&self, tpl: &TopicPartitionList) -> KafkaResult<()>
enable.auto.offset.store should be set to false in the config.
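As a sketch of at-least-once processing with manual offset storing (assumes
enable.auto.offset.store is set to false; handle_message is a hypothetical
application handler):

let message = consumer.recv().await?;
handle_message(&message)?; // hypothetical processing step
consumer.store_offset_from_message(&message)?;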
fn subscription(&self) -> KafkaResult<TopicPartitionList>
fn assignment(&self) -> KafkaResult<TopicPartitionList>
fn committed<T>(&self, timeout: T) -> KafkaResult<TopicPartitionList>
fn committed_offsets<T>(&self, tpl: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
fn offsets_for_timestamp<T>(&self, timestamp: i64, timeout: T) -> KafkaResult<TopicPartitionList>
fn offsets_for_times<T>(&self, timestamps: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
fn position(&self) -> KafkaResult<TopicPartitionList>
fn fetch_metadata<T>(&self, topic: Option<&str>, timeout: T) -> KafkaResult<Metadata>
fn fetch_watermarks<T>(&self, topic: &str, partition: i32, timeout: T) -> KafkaResult<(i64, i64)>
fn fetch_group_list<T>(&self, group: Option<&str>, timeout: T) -> KafkaResult<GroupList>
fn pause(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn resume(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn rebalance_protocol(&self) -> RebalanceProtocol
fn context(&self) -> &Arc<C>
Returns a reference to the ConsumerContext used to create this consumer.

impl<R> FromClientConfig for StreamConsumer<DefaultConsumerContext, R>
where
    R: AsyncRuntime,

fn from_config(config: &ClientConfig) -> KafkaResult<Self>

impl<C, R> FromClientConfigAndContext<C> for StreamConsumer<C, R>
where
    C: ConsumerContext + 'static,
    R: AsyncRuntime,

Creates a new StreamConsumer starting from a ClientConfig.
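For illustration, creating a consumer with a custom context might look like
this sketch (CustomContext is a hypothetical user-defined type):

use rdkafka::client::ClientContext;
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{ConsumerContext, StreamConsumer};

// Hypothetical no-op context; all ClientContext and ConsumerContext
// methods have default implementations.
struct CustomContext;
impl ClientContext for CustomContext {}
impl ConsumerContext for CustomContext {}

let consumer: StreamConsumer<CustomContext> = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092")
    .set("group.id", "example-group")
    .create_with_context(CustomContext)
    .expect("consumer creation failed");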