Struct aws_sdk_kinesis::client::fluent_builders::UpdateShardCount
pub struct UpdateShardCount { /* private fields */ }
Fluent builder constructing a request to UpdateShardCount.
Updates the shard count of the specified stream to the specified number of shards. This API is only supported for data streams with the provisioned capacity mode.
When invoking this API, it is recommended you use the StreamARN input parameter rather than the StreamName input parameter.
Updating the shard count is an asynchronous operation. Upon receiving the request, Kinesis Data Streams returns immediately and sets the status of the stream to UPDATING. After the update is complete, Kinesis Data Streams sets the status of the stream back to ACTIVE. Depending on the size of the stream, the scaling action could take a few minutes to complete. You can continue to read and write data to your stream while its status is UPDATING.
To update the shard count, Kinesis Data Streams performs splits or merges on individual shards. This can cause short-lived shards to be created, in addition to the final shards. These short-lived shards count towards your total shard limit for your account in the Region.
When using this operation, we recommend that you specify a target shard count that is a multiple of 25% (25%, 50%, 75%, 100%). You can specify any target value within your shard limit. However, if you specify a target that isn't a multiple of 25%, the scaling action might take longer to complete.
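The 25% guidance above amounts to a quick arithmetic check. A minimal sketch in plain Rust, assuming you want to validate a proposed target before calling the API (the helper name is illustrative and not part of the SDK):

```rust
/// Illustrative helper, not part of the SDK: returns true when `target`
/// is a whole multiple of 25% of `current`, the kind of target that
/// Kinesis Data Streams scales to fastest.
fn is_quarter_multiple(current: u32, target: u32) -> bool {
    // target = k * current / 4 for some whole k  <=>  4 * target is divisible by current
    current > 0 && (4 * target) % current == 0
}

fn main() {
    // For a 100-shard stream, 50 and 125 are multiples of 25%...
    assert!(is_quarter_multiple(100, 50));
    assert!(is_quarter_multiple(100, 125));
    // ...while 130 is not, so scaling to it might take longer to complete.
    assert!(!is_quarter_multiple(100, 130));
}
```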
This operation has the following default limits. By default, you cannot do the following:

- Scale more than ten times per rolling 24-hour period per stream
- Scale up to more than double your current shard count for a stream
- Scale down below half your current shard count for a stream
- Scale up to more than 10000 shards in a stream
- Scale a stream with more than 10000 shards down unless the result is less than 10000 shards
- Scale up to more than the shard limit for your account
For the default limits for an Amazon Web Services account, see Streams Limits in the Amazon Kinesis Data Streams Developer Guide. To request an increase in the call rate limit, the shard limit for this API, or your overall shard limit, use the limits form.
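A typical invocation of this builder looks like the following sketch. It assumes a configured Client with valid credentials and an existing provisioned-mode stream; the ARN, target count, and function name are placeholders, not part of the SDK:

```rust
use aws_sdk_kinesis::{model::ScalingType, Client, Error};

// Sketch, not a definitive implementation: assumes valid AWS credentials
// and an existing provisioned-mode stream; the ARN below is a placeholder.
async fn double_shard_count(client: &Client) -> Result<(), Error> {
    let resp = client
        .update_shard_count()
        // Prefer stream_arn over stream_name, per the recommendation above.
        .stream_arn("arn:aws:kinesis:us-east-1:111122223333:stream/my-stream")
        .target_shard_count(4)
        .scaling_type(ScalingType::UniformScaling)
        .send()
        .await?;
    // The call returns immediately; the stream transitions to UPDATING.
    println!("target shard count: {:?}", resp.target_shard_count());
    Ok(())
}
```

Because the operation is asynchronous on the service side, callers that need to act once scaling finishes should poll the stream status until it returns to ACTIVE.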
Implementations
impl UpdateShardCount
pub async fn customize(
    self
) -> Result<CustomizableOperation<UpdateShardCount, AwsResponseRetryClassifier>, SdkError<UpdateShardCountError>>
Consume this builder, creating a customizable operation that can be modified before being sent. The operation’s inner http::Request can be modified as well.
pub async fn send(
    self
) -> Result<UpdateShardCountOutput, SdkError<UpdateShardCountError>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
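The retry behavior mentioned above is set on the shared client configuration. A hedged sketch, assuming an SDK version where aws_config re-exports RetryConfig at aws_config::retry (the exact module path has moved between SDK versions):

```rust
use aws_config::retry::RetryConfig;

// Sketch: assumes the aws-config crate; RetryConfig's module path
// varies across SDK versions, so check your version's docs.
async fn make_client() -> aws_sdk_kinesis::Client {
    let shared_config = aws_config::from_env()
        // Raise the default retry budget from 3 total attempts to 5.
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    aws_sdk_kinesis::Client::new(&shared_config)
}
```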
pub fn stream_name(self, input: impl Into<String>) -> Self
The name of the stream.
pub fn set_stream_name(self, input: Option<String>) -> Self
The name of the stream.
pub fn target_shard_count(self, input: i32) -> Self
The new number of shards. This value has the following default limits. By default, you cannot do the following:

- Set this value to more than double your current shard count for a stream.
- Set this value below half your current shard count for a stream.
- Set this value to more than 10000 shards in a stream (the default limit for shard count per stream is 10000 per account per region), unless you request a limit increase.
- Scale a stream with more than 10000 shards down unless you set this value to less than 10000 shards.
pub fn set_target_shard_count(self, input: Option<i32>) -> Self
The new number of shards. This value has the following default limits. By default, you cannot do the following:

- Set this value to more than double your current shard count for a stream.
- Set this value below half your current shard count for a stream.
- Set this value to more than 10000 shards in a stream (the default limit for shard count per stream is 10000 per account per region), unless you request a limit increase.
- Scale a stream with more than 10000 shards down unless you set this value to less than 10000 shards.
pub fn scaling_type(self, input: ScalingType) -> Self
The scaling type. Uniform scaling creates shards of equal size.
pub fn set_scaling_type(self, input: Option<ScalingType>) -> Self
The scaling type. Uniform scaling creates shards of equal size.
pub fn stream_arn(self, input: impl Into<String>) -> Self
The ARN of the stream.
pub fn set_stream_arn(self, input: Option<String>) -> Self
The ARN of the stream.
Trait Implementations
impl Clone for UpdateShardCount
fn clone(&self) -> UpdateShardCount
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more