pub struct Batcher { /* private fields */ }
A batcher can accept messages into an internal buffer, and report when messages must be flushed.
The recommended usage pattern looks something like this:
use segment::{Batcher, Client, HttpClient};
use segment::message::{BatchMessage, Track, User};
use serde_json::json;

let mut batcher = Batcher::new(None);
let client = HttpClient::default();

for i in 0..100 {
    let msg = Track {
        user: User::UserId { user_id: format!("user-{}", i) },
        event: "Example".to_owned(),
        properties: json!({ "foo": "bar" }),
        ..Default::default()
    };

    // Batcher returns ownership of a message if pushing it would
    // overflow the internal buffer.
    //
    // When this occurs, we flush the batcher, create a new batcher, and
    // push the message into the new batcher.
    if let Some(msg) = batcher.push(msg).unwrap() {
        client.send("your_write_key".to_string(), batcher.into_message());
        batcher = Batcher::new(None);
        batcher.push(msg).unwrap();
    }
}
Batcher will attempt to fit messages into maximally-sized batches, reducing the number of round trips to Segment’s tracking API. However, if you produce messages infrequently, this may significantly delay their delivery to Segment.

If this delay is a concern, it is recommended that you periodically flush the batcher yourself by calling into_message.

By default, if a message pushed into the batcher does not contain a timestamp, the time of the push is automatically added to the message. You can disable this behaviour with the without_auto_timestamp method.
Implementations

impl Batcher
pub fn new(context: Option<Value>) -> Self
Construct a new, empty batcher.
Optionally, you may specify a context that should be set on every batch returned by into_message.
pub fn without_auto_timestamp(&mut self)

Disables the automatic timestamping of pushed messages that do not already carry a timestamp.
pub fn push(&mut self, msg: impl Into<BatchMessage>) -> Result<Option<BatchMessage>>
Push a message into the batcher.
Returns Ok(None) if the message was accepted and is now owned by the batcher.

Returns Ok(Some(msg)) if the message was rejected because the current batch would be oversized if this message were accepted. The given message is returned back, and it is recommended that you flush the current batch before attempting to push msg again.
Returns an error if the message is too large to be sent to Segment’s API.
pub fn into_message(self) -> Message
Consumes this batcher and converts it into a message that can be sent to Segment.
Trait Implementations

Auto Trait Implementations
impl Freeze for Batcher
impl RefUnwindSafe for Batcher
impl Send for Batcher
impl Sync for Batcher
impl Unpin for Batcher
impl UnwindSafe for Batcher
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,

default unsafe fn clone_to_uninit(&self, dst: *mut T)