pub trait Producer<C = DefaultProducerContext>
where
    C: ProducerContext + 'static,
{
    // Required methods
    fn client(&self) -> &Client<C>;
    fn in_flight_count(&self) -> i32;
    fn flush<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>;
    fn init_transactions<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>;
    fn begin_transaction(&self) -> KafkaResult<()>;
    fn send_offsets_to_transaction<T: Into<Timeout>>(
        &self,
        offsets: &TopicPartitionList,
        cgm: &ConsumerGroupMetadata,
        timeout: T,
    ) -> KafkaResult<()>;
    fn commit_transaction<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>;
    fn abort_transaction<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>;

    // Provided method
    fn context(&self) -> &Arc<C> { ... }
}
Common trait for all producers.
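As a rough illustration of programming against this trait generically, here is a minimal sketch; the wait_for_delivery helper is hypothetical and simply forwards to flush:

use std::time::Duration;

use rdkafka::error::KafkaResult;
use rdkafka::producer::{Producer, ProducerContext};

// Hypothetical helper: works with any producer type implementing the trait
// (BaseProducer, ThreadedProducer, FutureProducer, ...).
fn wait_for_delivery<C, P>(producer: &P, timeout: Duration) -> KafkaResult<()>
where
    C: ProducerContext + 'static,
    P: Producer<C>,
{
    // Block until all enqueued messages are delivered or the timeout expires.
    producer.flush(timeout)
}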
Required Methods
fn in_flight_count(&self) -> i32
Returns the number of messages that are either waiting to be sent or are sent but are waiting to be acknowledged.
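For illustration, a minimal sketch (assuming a BaseProducer created elsewhere) that serves delivery callbacks until the backlog drains:

use std::time::Duration;
use rdkafka::producer::{BaseProducer, Producer};

fn drain(producer: &BaseProducer) {
    // Keep serving delivery reports until nothing is queued or in flight.
    while producer.in_flight_count() > 0 {
        producer.poll(Duration::from_millis(100));
    }
}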
fn flush<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>
Flushes any pending messages.
This method should be called before termination to ensure delivery of all enqueued messages. It will call poll() internally.
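A minimal sketch of flushing before shutdown, assuming a BaseProducer and a hypothetical "events" topic:

use std::time::Duration;
use rdkafka::producer::{BaseProducer, BaseRecord, Producer};

fn shutdown(producer: &BaseProducer) {
    // Enqueue a final message; on failure the record is returned for retry.
    let _ = producer.send(BaseRecord::to("events").key("k").payload("bye"));

    // Block up to 10 seconds while delivery callbacks are served.
    if let Err(e) = producer.flush(Duration::from_secs(10)) {
        eprintln!("flush failed, some messages may be undelivered: {}", e);
    }
}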
fn init_transactions<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>
Enables sending transactions with this producer.
Prerequisites
- The configuration used to create the producer must include a transactional.id setting.
- You must not have sent any messages or called any of the other transaction-related functions.
Details
This function ensures any transactions initiated by previous producers with the same transactional.id are completed. Any transactions left open by such producers will be aborted.
Once previous transactions have been fenced, this function acquires an internal producer ID and epoch that will be used by all transactional messages sent by this producer.
If this function returns successfully, messages may only be sent via this producer when a transaction is active. See Producer::begin_transaction.
This function may block for the specified timeout.
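A minimal sketch of creating and initializing a transactional producer; the broker address and transactional.id are placeholders:

use std::time::Duration;
use rdkafka::config::ClientConfig;
use rdkafka::error::KafkaResult;
use rdkafka::producer::{BaseProducer, Producer};

fn transactional_producer() -> KafkaResult<BaseProducer> {
    let producer: BaseProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        // Required for any of the transaction APIs.
        .set("transactional.id", "example-txn-producer")
        .create()?;

    // Fences older producers with the same transactional.id and acquires
    // the internal producer ID/epoch. May block up to 30 seconds here.
    producer.init_transactions(Duration::from_secs(30))?;
    Ok(producer)
}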
fn begin_transaction(&self) -> KafkaResult<()>
Begins a new transaction.
Prerequisites
You must have successfully called Producer::init_transactions.
Details
This function begins a new transaction, and implicitly associates that open transaction with this producer.
After a successful call to this function, any messages sent via this producer or any calls to Producer::send_offsets_to_transaction will be implicitly associated with this transaction, until the transaction is finished.
Finish the transaction by calling Producer::commit_transaction or Producer::abort_transaction.
While a transaction is open, you must perform at least one transaction operation every transaction.timeout.ms to avoid timing out the transaction on the broker.
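A rough sketch of a single transaction, assuming a transactional BaseProducer that has already completed init_transactions (error recovery via abort_transaction is shown under commit_transaction below):

use std::time::Duration;
use rdkafka::error::KafkaResult;
use rdkafka::producer::{BaseProducer, BaseRecord, Producer};

fn produce_atomically(producer: &BaseProducer) -> KafkaResult<()> {
    producer.begin_transaction()?;

    // Everything sent between begin and commit belongs to this transaction.
    producer
        .send(BaseRecord::to("events").key("k").payload("v"))
        .map_err(|(e, _record)| e)?;

    producer.commit_transaction(Duration::from_secs(30))
}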
fn send_offsets_to_transaction<T: Into<Timeout>>(
    &self,
    offsets: &TopicPartitionList,
    cgm: &ConsumerGroupMetadata,
    timeout: T,
) -> KafkaResult<()>
Associates an offset commit operation with this transaction.
Prerequisites
The producer must have an open transaction via a call to Producer::begin_transaction.
Details
Sends a list of topic partition offsets to the consumer group coordinator for cgm, and marks the offsets as part of the current transaction. These offsets will be considered committed only if the transaction is committed successfully.
The offsets should refer to the next message your application will consume, i.e., one greater than the last processed message's offset for each partition.
Use this method at the end of a consume-transform-produce loop, prior to committing the transaction with Producer::commit_transaction.
This function may block for the specified timeout.
Hints
To obtain the correct consumer group metadata, call Consumer::group_metadata on the consumer for which offsets are being committed.
The consumer must not have automatic commits enabled.
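A sketch of this step inside a consume-transform-produce loop, assuming a BaseConsumer with auto-commit disabled and msg being the last message processed in the transaction:

use std::time::Duration;
use rdkafka::consumer::{BaseConsumer, Consumer};
use rdkafka::error::KafkaResult;
use rdkafka::message::BorrowedMessage;
use rdkafka::producer::{BaseProducer, Producer};
use rdkafka::{Message, Offset, TopicPartitionList};

fn attach_offsets(
    producer: &BaseProducer,
    consumer: &BaseConsumer,
    msg: &BorrowedMessage,
) -> KafkaResult<()> {
    // Commit the *next* offset to consume: last processed offset + 1.
    let mut offsets = TopicPartitionList::new();
    offsets.add_partition_offset(
        msg.topic(),
        msg.partition(),
        Offset::Offset(msg.offset() + 1),
    )?;

    // group_metadata() yields None if the consumer has no group; this sketch
    // assumes the consumer is part of a group.
    let cgm = consumer.group_metadata().expect("consumer has a group");
    producer.send_offsets_to_transaction(&offsets, &cgm, Duration::from_secs(30))
}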
fn commit_transaction<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>
Commits the current transaction.
Prerequisites
The producer must have an open transaction via a call to Producer::begin_transaction.
Details
Any outstanding messages will be flushed (i.e., delivered) before actually committing the transaction.
If any of the outstanding messages fail permanently, the current transaction will enter an abortable error state and this function will return an abortable error. You must then call Producer::abort_transaction before attempting to create another transaction.
This function may block for the specified timeout.
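A sketch of committing with recovery from an abortable error; it assumes the txn_requires_abort check exposed on RDKafkaError:

use std::time::Duration;
use rdkafka::error::{KafkaError, KafkaResult};
use rdkafka::producer::{BaseProducer, Producer};

fn finish(producer: &BaseProducer) -> KafkaResult<()> {
    match producer.commit_transaction(Duration::from_secs(30)) {
        Ok(()) => Ok(()),
        // Abortable errors leave the transaction in an error state: abort it,
        // after which the producer may begin a new transaction.
        Err(KafkaError::Transaction(e)) if e.txn_requires_abort() => {
            producer.abort_transaction(Duration::from_secs(30))
        }
        Err(e) => Err(e),
    }
}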
fn abort_transaction<T: Into<Timeout>>(&self, timeout: T) -> KafkaResult<()>
Aborts the current transaction.
Prerequisites
The producer must have an open transaction via a call to Producer::begin_transaction.
Details
Any outstanding messages will be purged and failed with RDKafkaErrorCode::PurgeInflight or RDKafkaErrorCode::PurgeQueue.
This function should also be used to recover from non-fatal abortable transaction errors.
This function may block for the specified timeout.
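For example, a short sketch that aborts when an application-side processing step fails mid-transaction (process is a hypothetical stand-in for that step):

use std::time::Duration;
use rdkafka::error::KafkaResult;
use rdkafka::producer::{BaseProducer, Producer};

fn run_once(
    producer: &BaseProducer,
    process: impl Fn() -> Result<(), String>,
) -> KafkaResult<()> {
    producer.begin_transaction()?;
    if process().is_err() {
        // Purge anything already enqueued and roll the transaction back.
        return producer.abort_transaction(Duration::from_secs(30));
    }
    producer.commit_transaction(Duration::from_secs(30))
}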
Provided Methods
fn context(&self) -> &Arc<C>
Returns a reference to the ProducerContext used to create this producer.