# S3 multipart upload user guide
This page explains multipart upload and shows validated configuration examples for common S3 clients.
## What multipart upload means
Multipart upload splits one large file into multiple parts, uploads those parts separately, then asks the S3 server to assemble the final object.
Benefits:
- Large files are more reliable to upload.
- Failed parts can be retried without restarting the whole upload.
- Parts can be uploaded in parallel.
- It avoids the single-request upload size limit.
Important limits:
| Limit | Value |
|---|---|
| Minimum part size | 5 MiB (the final part may be smaller) |
| Maximum part size | 5 GiB |
| Maximum number of parts | 10,000 |
| Maximum AWS S3 object size | 5 TiB |
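To see how these limits interact, here is a small sanity check in plain Python (pure arithmetic, no SDK calls; the constants come from the table above):

```python
MiB = 1024 ** 2
GiB = 1024 ** 3
TiB = 1024 ** 4

MIN_PART = 5 * MiB    # minimum part size (all parts except the last)
MAX_PART = 5 * GiB    # maximum part size
MAX_PARTS = 10_000    # maximum number of parts per upload

# Largest object reachable while using the minimum part size:
print(MIN_PART * MAX_PARTS / GiB)   # ~48.8 GiB

# The part limits alone would allow ~48.8 TiB,
# but S3 caps object size at 5 TiB regardless:
print(MAX_PART * MAX_PARTS / TiB)   # ~48.8 TiB (theoretical)
```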
For this Ceph RGW service, use:

- multipart threshold: 128 MiB
- multipart part size: 128 MiB
- initial concurrency: 8

Use 256 MiB parts for very large backup or dataset objects.
**Warning:** Do not use 5 MiB parts for large files. Small parts work for moderately sized files, but on genuinely large files they create far too many parts and can hit the 10,000-part limit.
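A quick part-count calculation makes the warning concrete (plain Python, no SDK required):

```python
MiB = 1024 ** 2
GiB = 1024 ** 3

def part_count(object_size: int, part_size: int) -> int:
    """Number of multipart parts needed; the last part may be smaller."""
    return -(-object_size // part_size)  # ceiling division

size = 100 * GiB
print(part_count(size, 5 * MiB))    # 20480 parts -- over the 10,000 limit
print(part_count(size, 128 * MiB))  # 800 parts -- comfortable
```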
## Recommended values
| Use case | Threshold | Part size | Concurrency |
|---|---|---|---|
| Normal large uploads | 128 MiB | 128 MiB | 8 |
| Faster server-side uploads | 128 MiB | 128 MiB | 16 (after testing) |
| Very large files / backups | 256 MiB | 256 MiB | 4-8 |
| Multi-TiB single files | 512 MiB or 1 GiB | 512 MiB or 1 GiB | 4-8 |
Memory rule:

approximate memory = part_size * concurrency * active_transfers

Examples (assuming one active transfer):

- 128 MiB * 8 = about 1 GiB
- 128 MiB * 16 = about 2 GiB
- 256 MiB * 8 = about 2 GiB
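The rule can be expressed as a small helper (a sketch; exact buffering behavior varies by client, so treat the result as an upper-bound estimate):

```python
MiB = 1024 ** 2

def approx_upload_memory(part_size: int, concurrency: int,
                         active_transfers: int = 1) -> int:
    """Rough upper bound on client buffer memory for multipart uploads."""
    return part_size * concurrency * active_transfers

print(approx_upload_memory(128 * MiB, 8) // MiB)    # 1024 -> about 1 GiB
print(approx_upload_memory(128 * MiB, 16) // MiB)   # 2048 -> about 2 GiB
print(approx_upload_memory(256 * MiB, 8) // MiB)    # 2048 -> about 2 GiB
```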
## AWS CLI
AWS CLI S3 transfer commands support:

- `multipart_threshold`
- `multipart_chunksize`
- `max_concurrent_requests`
Set recommended defaults:

```shell
aws configure set --profile <profile-name> s3.multipart_threshold 128MB
aws configure set --profile <profile-name> s3.multipart_chunksize 128MB
aws configure set --profile <profile-name> s3.max_concurrent_requests 8
```
For faster uploads from a capable server:

```shell
aws configure set --profile <profile-name> s3.multipart_threshold 128MB
aws configure set --profile <profile-name> s3.multipart_chunksize 128MB
aws configure set --profile <profile-name> s3.max_concurrent_requests 16
```
For very large files:

```shell
aws configure set --profile <profile-name> s3.multipart_threshold 256MB
aws configure set --profile <profile-name> s3.multipart_chunksize 256MB
aws configure set --profile <profile-name> s3.max_concurrent_requests 8
```
Upload to Ceph RGW:

```shell
aws --endpoint-url https://<rgw-endpoint> --profile <profile-name> s3 cp large-file.tar.zst s3://<bucket>/<path>/
```
Check the current AWS CLI config:

```shell
aws configure get --profile <profile-name> s3.multipart_threshold
aws configure get --profile <profile-name> s3.multipart_chunksize
aws configure get --profile <profile-name> s3.max_concurrent_requests
```
## s3cmd
s3cmd supports multipart upload with `--multipart-chunk-size-mb=SIZE`.
According to the s3cmd manual, files larger than SIZE are uploaded as multipart, the default chunk size is 15 MB, the minimum is 5 MB, and the maximum is 5 GB.
Recommended upload:

```shell
s3cmd --host=<rgw-hostname> --host-bucket=<rgw-hostname> \
  --multipart-chunk-size-mb=128 \
  put large-file.tar.zst s3://<bucket>/<path>/
```
For very large files:

```shell
s3cmd --host=<rgw-hostname> --host-bucket=<rgw-hostname> \
  --multipart-chunk-size-mb=256 \
  put very-large-file.tar.zst s3://<bucket>/<path>/
```
Do not use `--disable-multipart`.
## boto3
Use `boto3.s3.transfer.TransferConfig`.
Recommended example:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

MiB = 1024 * 1024

transfer_config = TransferConfig(
    multipart_threshold=128 * MiB,
    multipart_chunksize=128 * MiB,
    max_concurrency=8,
    use_threads=True,
)

s3 = boto3.client(
    "s3",
    endpoint_url="https://<rgw-endpoint>",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    config=Config(
        signature_version="s3v4",
        s3={"addressing_style": "path"},
    ),
)

s3.upload_file(
    "large-file.tar.zst",
    "<bucket>",
    "<path>/large-file.tar.zst",
    Config=transfer_config,
)
```
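If you want upload progress, `upload_file` also accepts a `Callback` that boto3 invokes with the byte count of each transferred chunk. A minimal thread-safe sketch (the file and bucket names are placeholders, and the class name is ours, not a boto3 API):

```python
import os
import threading

class ProgressPercentage:
    """Progress callback for boto3 upload_file.

    Multipart uploads may run on several threads, so updates
    to the running total are guarded by a lock.
    """

    def __init__(self, filename: str):
        self._size = os.path.getsize(filename)
        self._seen = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount: int):
        with self._lock:
            self._seen += bytes_amount
            pct = 100 * self._seen / self._size
            print(f"\r{self._seen}/{self._size} bytes ({pct:.1f}%)", end="")

# Usage with the client and config from the example above:
# s3.upload_file("large-file.tar.zst", "<bucket>", "<path>/large-file.tar.zst",
#                Config=transfer_config,
#                Callback=ProgressPercentage("large-file.tar.zst"))
```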
For very large files:

```python
transfer_config = TransferConfig(
    multipart_threshold=256 * MiB,
    multipart_chunksize=256 * MiB,
    max_concurrency=8,
    use_threads=True,
)
```
boto3 defaults from the current customization reference include:

- `multipart_threshold` = 8 MiB
- `multipart_chunksize` = 8 MiB
- `max_concurrency` = 10
- `use_threads` = True
For this Ceph RGW service, override those defaults for large uploads.
## rclone
rclone S3 multipart upload uses:

- `--s3-upload-cutoff`
- `--s3-chunk-size`
- `--s3-upload-concurrency`
- `--s3-max-upload-parts`
rclone documents the multipart memory formula as `--transfers * --s3-upload-concurrency * --s3-chunk-size`.
Recommended command:

```shell
rclone copy /data/large-file.tar.zst ceph:<bucket>/<path>/ \
  --s3-upload-cutoff 128M \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 8
```
For very large files:

```shell
rclone copy /data/very-large-file.tar.zst ceph:<bucket>/<path>/ \
  --s3-upload-cutoff 256M \
  --s3-chunk-size 256M \
  --s3-upload-concurrency 4
```
For multiple files, keep memory in mind. Example:

```shell
rclone copy /dataset ceph:<bucket>/dataset/ \
  --transfers 4 \
  --s3-upload-cutoff 128M \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 4
```
Memory estimate:
4 transfers * 4 upload concurrency * 128 MiB = about 2 GiB
rclone documented defaults checked:

| Option | Default |
|---|---|
| `--s3-upload-cutoff` | 200 MiB |
| `--s3-chunk-size` | 5 MiB |
| `--s3-upload-concurrency` | 4 |
| `--s3-max-upload-parts` | 10,000 |
Because the default `--s3-chunk-size` is only 5 MiB, set it explicitly for large dataset uploads.
## Choosing a part size by object size
| Object size | Minimum sane part size | Recommended part size |
|---|---|---|
| 20 GiB | 8 MiB | 128 MiB |
| 100 GiB | 16 MiB | 128 MiB |
| 1 TiB | 128 MiB | 256 MiB |
| 5 TiB | 1 GiB | 1 GiB |
The "minimum sane" size only avoids the 10,000-part limit. The recommended size reduces request and metadata overhead.
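The "minimum sane" column follows a simple convention: the smallest power-of-two part size, starting at 8 MiB (the first power of two above the 5 MiB minimum), that keeps the part count within 10,000. A sketch of that rule:

```python
MiB = 1024 ** 2
GiB = 1024 ** 3
TiB = 1024 ** 4
MAX_PARTS = 10_000

def min_sane_part_size(object_size: int) -> int:
    """Smallest power-of-two part size (>= 8 MiB) within 10,000 parts."""
    part = 8 * MiB  # first power of two above the 5 MiB minimum
    while -(-object_size // part) > MAX_PARTS:  # ceiling division
        part *= 2
    return part

print(min_sane_part_size(20 * GiB) // MiB)   # 8
print(min_sane_part_size(100 * GiB) // MiB)  # 16
print(min_sane_part_size(1 * TiB) // MiB)    # 128
print(min_sane_part_size(5 * TiB) // MiB)    # 1024 -> 1 GiB
```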
## Troubleshooting
List incomplete multipart uploads:

```shell
aws --endpoint-url https://<rgw-endpoint> s3api list-multipart-uploads --bucket <bucket>
```
Abort one incomplete multipart upload:

```shell
aws --endpoint-url https://<rgw-endpoint> s3api abort-multipart-upload \
  --bucket <bucket> \
  --key <object-key> \
  --upload-id <upload-id>
```
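The same cleanup can be scripted with boto3, whose client exposes `list_multipart_uploads` and `abort_multipart_upload`. A cautious sketch that dry-runs by default (the helper name is ours, not a boto3 API):

```python
def abort_stale_uploads(s3_client, bucket: str, dry_run: bool = True) -> int:
    """List incomplete multipart uploads in a bucket; abort them when
    dry_run is False. Aborting discards all uploaded parts.
    Returns the number of incomplete uploads found."""
    resp = s3_client.list_multipart_uploads(Bucket=bucket)
    uploads = resp.get("Uploads", [])
    for u in uploads:
        print(u["Key"], u["UploadId"])
        if not dry_run:
            s3_client.abort_multipart_upload(
                Bucket=bucket, Key=u["Key"], UploadId=u["UploadId"]
            )
    return len(uploads)

# Usage (client construction as in the boto3 section above):
# s3 = boto3.client("s3", endpoint_url="https://<rgw-endpoint>")
# abort_stale_uploads(s3, "<bucket>")                  # review first
# abort_stale_uploads(s3, "<bucket>", dry_run=False)   # then abort
```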
Common problems:
| Symptom | Likely cause | Fix |
|---|---|---|
| Upload fails around large object sizes | Too many parts | Increase part size |
| Client rejected by RGW | Part size below RGW minimum | Use 128 MiB or ask an admin for the current minimum |
| Client uses too much RAM | Part size/concurrency too high | Lower concurrency first |
| rclone stream upload fails for very large unknown-size input | `--s3-chunk-size` too small for 10,000 parts | Increase `--s3-chunk-size` |
## Source references
- AWS CLI S3 configuration: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
- AWS S3 multipart limits: https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html
- AWS S3 multipart overview: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
- s3cmd usage: https://s3tools.org/usage
- s3cmd multipart FAQ: https://s3tools.org/kb/item13.htm
- boto3 S3 transfer guide: https://docs.aws.amazon.com/boto3/latest/guide/s3.html
- boto3 S3 customization reference: https://docs.aws.amazon.com/boto3/latest/reference/customizations/s3.html
- rclone S3 backend: https://rclone.org/s3/
Created: May 15, 2026