Struct rdkafka::consumer::base_consumer::BaseConsumer
pub struct BaseConsumer<C = DefaultConsumerContext>
where
    C: ConsumerContext + 'static,
{ /* private fields */ }
A low-level consumer that requires manual polling.
This consumer must be polled periodically to receive messages and to make progress on rebalancing and queued callbacks.
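The sketch below shows the typical usage pattern: build the consumer from a ClientConfig, subscribe, and poll in a loop. The broker address, group id, and topic name are placeholders, not values prescribed by this API.

use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::message::Message;

fn main() {
    // Placeholder broker address, group id, and topic name.
    let consumer: BaseConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "example-group")
        .create()
        .expect("consumer creation failed");

    consumer
        .subscribe(&["example-topic"])
        .expect("subscription failed");

    loop {
        // Poll regularly so rebalances and callbacks are served,
        // even when no message arrives within the timeout.
        match consumer.poll(Duration::from_millis(100)) {
            Some(Ok(message)) => println!(
                "{}:{}@{}: {:?}",
                message.topic(),
                message.partition(),
                message.offset(),
                message.payload()
            ),
            Some(Err(err)) => eprintln!("poll error: {}", err),
            None => {} // timeout expired with nothing to report
        }
    }
}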
Implementations
impl<C> BaseConsumer<C>
where
    C: ConsumerContext,
pub fn poll<T: Into<Timeout>>(&self, timeout: T) -> Option<KafkaResult<BorrowedMessage<'_>>>
Polls the consumer for new messages.
The call blocks for at most the specified timeout. Use a zero Duration for a non-blocking call; with no timeout it blocks until an event is received.
This method should be called at regular intervals, even if no message is expected, to serve any queued callbacks waiting to be called. This is especially important for automatic consumer rebalance, as the rebalance function will be executed by the thread calling the poll() function.
Lifetime
The returned message lives in the memory of the consumer and cannot outlive it.
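As an illustration of the timeout parameter, any of the following forms is accepted because the argument only needs to implement Into<Timeout>. The helper function and the consumer it receives are assumptions for the sketch, not part of this page.

use std::time::Duration;

use rdkafka::consumer::BaseConsumer;

// Hypothetical helper demonstrating the timeout forms accepted by poll.
fn poll_examples(consumer: &BaseConsumer) {
    // Non-blocking: returns immediately if nothing is ready.
    let _ = consumer.poll(Duration::from_millis(0));

    // Bounded: waits up to one second for a message or event.
    let _ = consumer.poll(Duration::from_secs(1));

    // Blocking: `None` converts to a never-expiring timeout.
    let _ = consumer.poll(None);
}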
pub fn iter(&self) -> Iter<'_, C>
Returns an iterator over the available messages.
It repeatedly calls poll with no timeout.
Note that it's also possible to iterate over the consumer directly.
Examples
All these are equivalent and will receive messages without timing out.
loop {
let message = consumer.poll(None);
// Handle the message
}
for message in consumer.iter() {
// Handle the message
}
for message in &consumer {
// Handle the message
}
pub fn split_partition_queue(self: &Arc<Self>, topic: &str, partition: i32) -> Option<PartitionQueue<C>>
Splits messages for the specified partition into their own queue.
If the topic or partition is invalid, returns None.
After calling this method, newly-fetched messages for the specified partition will be returned via PartitionQueue::poll rather than BaseConsumer::poll. Note that there may be buffered messages for the specified partition that will continue to be returned by BaseConsumer::poll. For best results, call split_partition_queue before the first call to BaseConsumer::poll.
You must continue to call BaseConsumer::poll, even if no messages are expected, to serve callbacks.
Note that calling Consumer::assign will deactivate any existing partition queues. You will need to call this method for every partition that should be split after every call to assign.
Beware that this method is implemented for &Arc<Self>, not &self. You will need to wrap your consumer in an Arc in order to call this method. This design permits moving the partition queue to another thread while ensuring the partition queue does not outlive the consumer.
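A minimal sketch of this pattern, assuming placeholder broker, group, and topic names: the consumer is wrapped in an Arc, one partition is split off before the first poll, and the resulting PartitionQueue is moved to a worker thread while the main thread keeps polling the consumer.

use std::sync::Arc;
use std::thread;
use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::message::Message;

fn main() {
    // Placeholder broker, group, and topic names.
    let consumer: Arc<BaseConsumer> = Arc::new(
        ClientConfig::new()
            .set("bootstrap.servers", "localhost:9092")
            .set("group.id", "example-group")
            .create()
            .expect("consumer creation failed"),
    );
    consumer
        .subscribe(&["example-topic"])
        .expect("subscription failed");

    // Split partition 0 before the first poll so its messages are not
    // buffered on the main queue.
    let partition_queue = consumer
        .split_partition_queue("example-topic", 0)
        .expect("invalid topic or partition");

    let _worker = thread::spawn(move || loop {
        // Messages for ("example-topic", 0) now arrive only here.
        if let Some(Ok(message)) = partition_queue.poll(Duration::from_millis(100)) {
            println!("partition 0: offset {}", message.offset());
        }
    });

    loop {
        // The consumer itself must still be polled to serve callbacks and
        // to receive messages for all other partitions.
        if let Some(Ok(message)) = consumer.poll(Duration::from_millis(100)) {
            println!("partition {}: offset {}", message.partition(), message.offset());
        }
    }
}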
Trait Implementations
impl<C> Consumer<C> for BaseConsumer<C>
where
    C: ConsumerContext,
fn group_metadata(&self) -> Option<ConsumerGroupMetadata>
fn subscribe(&self, topics: &[&str]) -> KafkaResult<()>
fn unsubscribe(&self)
fn assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
fn seek<T: Into<Timeout>>(&self, topic: &str, partition: i32, offset: Offset, timeout: T) -> KafkaResult<()>
Seeks to offset for the specified topic and partition. After a successful call to seek, the next poll of the consumer will return the message with offset.
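As an illustration (the topic name, partition, offset, and timeout below are arbitrary placeholders), a call to seek on a partition the consumer is currently consuming might look like this:

use std::time::Duration;

use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::error::KafkaResult;
use rdkafka::Offset;

// Hypothetical helper: rewind one partition to a known offset.
fn rewind(consumer: &BaseConsumer) -> KafkaResult<()> {
    // The next poll for ("example-topic", 0) will return the message at offset 42.
    consumer.seek("example-topic", 0, Offset::Offset(42), Duration::from_secs(1))
}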
fn commit(&self, topic_partition_list: &TopicPartitionList, mode: CommitMode) -> KafkaResult<()>
fn commit_consumer_state(&self, mode: CommitMode) -> KafkaResult<()>
fn commit_message(&self, message: &BorrowedMessage<'_>, mode: CommitMode) -> KafkaResult<()>
fn store_offset(&self, topic: &str, partition: i32, offset: i64) -> KafkaResult<()>
Stores the offset to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
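A possible usage sketch, with placeholder broker, group, and topic names: automatic offset storage is disabled in the config, and store_offset is called only after a message has been processed.

use std::time::Duration;

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::message::Message;

fn main() {
    // Placeholder names; the key point is disabling automatic offset storage.
    let consumer: BaseConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "example-group")
        .set("enable.auto.offset.store", "false")
        .create()
        .expect("consumer creation failed");
    consumer
        .subscribe(&["example-topic"])
        .expect("subscription failed");

    loop {
        if let Some(Ok(message)) = consumer.poll(Duration::from_millis(100)) {
            // ... process the message ...

            // Mark the message as processed so its offset is picked up by
            // the next (auto)commit.
            consumer
                .store_offset(message.topic(), message.partition(), message.offset())
                .expect("failed to store offset");
        }
    }
}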
fn store_offset_from_message(&self, message: &BorrowedMessage<'_>) -> KafkaResult<()>
Like Consumer::store_offset, but the offset to store is derived from the provided message.
fn store_offsets(&self, tpl: &TopicPartitionList) -> KafkaResult<()>
Stores the offsets to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
fn subscription(&self) -> KafkaResult<TopicPartitionList>
fn assignment(&self) -> KafkaResult<TopicPartitionList>
fn committed<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<TopicPartitionList>
fn committed_offsets<T: Into<Timeout>>(&self, tpl: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
fn offsets_for_timestamp<T: Into<Timeout>>(&self, timestamp: i64, timeout: T) -> KafkaResult<TopicPartitionList>
fn offsets_for_times<T: Into<Timeout>>(&self, timestamps: TopicPartitionList, timeout: T) -> KafkaResult<TopicPartitionList>
fn position(&self) -> KafkaResult<TopicPartitionList>
fn fetch_metadata<T: Into<Timeout>>(&self, topic: Option<&str>, timeout: T) -> KafkaResult<Metadata>
fn fetch_watermarks<T: Into<Timeout>>(&self, topic: &str, partition: i32, timeout: T) -> KafkaResult<(i64, i64)>
fn fetch_group_list<T: Into<Timeout>>(&self, group: Option<&str>, timeout: T) -> KafkaResult<GroupList>
fn pause(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn resume(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn rebalance_protocol(&self) -> RebalanceProtocol
fn context(&self) -> &Arc<C>
Returns a reference to the ConsumerContext used to create this consumer.
impl FromClientConfig for BaseConsumer
fn from_config(config: &ClientConfig) -> KafkaResult<BaseConsumer>
impl<C: ConsumerContext> FromClientConfigAndContext<C> for BaseConsumer<C>
Creates a new BaseConsumer starting from a ClientConfig.