
Commit 22d0749

docs(storage): Update EnableParallelUpload documentation (#14328)
1 parent 751febd commit 22d0749

2 files changed: 36 additions & 3 deletions

storage/doc.go (33 additions, 0 deletions)
@@ -416,6 +416,39 @@ apply to single-shot uploads when user-provided checksum is provided.
 
 Automatic checksumming can be disabled using [Writer.DisableAutoChecksum].
+
+# Parallel Uploads
+
+The parallel upload feature splits a large object into multiple parts and uploads them
+in parallel. It is supported exclusively for gRPC clients. If used with a JSON
+client, the configuration is ignored and a standard upload is performed.
+
+Parallel uploads can yield higher throughput when uploading large objects.
+However, there are several trade-offs to keep in mind when choosing this
+strategy:
+  - Performing parallel uploads may incur additional costs. Class A
+    operations are performed to create each part. If a storage class other
+    than STANDARD is used, early deletion fees apply to deletion of the parts.
+  - The service account/credentials used to perform the parallel upload
+    require `storage.objects.delete` in order to clean up the temporary
+    part objects.
+  - A failed upload can leave part objects behind, which count as storage
+    usage and are billed accordingly. Upon completion or failure of a
+    parallel upload, the Writer makes a best-effort attempt to clean up any
+    temporary parts created. However, if the program crashes, the client has
+    no way to perform the cleanup. Temporary parts have the prefix
+    "gcs-go-sdk-pu-tmp". It is recommended to set appropriate bucket
+    lifecycle policies to reliably clean up any leftover objects and avoid
+    unnecessary storage costs.
+  - Parallel uploads are not a one-size-fits-all solution. They introduce
+    overhead that is only offset when uploading sufficiently large objects.
+    The optimal threshold depends on many factors; therefore, experiment
+    with your specific workload to determine whether parallel uploads
+    provide a benefit.
+
+**Note:** This feature is currently experimental and its API surface may change
+in future releases. It is not yet recommended for production use.
 
 # Storage Control API
 
 Certain control plane and long-running operations for Cloud Storage (including Folder
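The doc comment above recommends bucket lifecycle policies to clean up any leftover "gcs-go-sdk-pu-tmp" part objects. As a minimal sketch, a lifecycle rule along the following lines (assuming the standard Cloud Storage lifecycle JSON schema with the `matchesPrefix` condition) would delete leftover parts after one day:

```
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {
        "age": 1,
        "matchesPrefix": ["gcs-go-sdk-pu-tmp"]
      }
    }
  ]
}
```

Such a policy could be applied with, for example, `gcloud storage buckets update gs://my-bucket --lifecycle-file=lifecycle.json` (bucket and file names here are hypothetical). Keep the `age` threshold longer than your longest expected upload so in-flight parts are not deleted prematurely.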

storage/writer.go (3 additions, 3 deletions)
@@ -183,9 +183,9 @@ type Writer struct {
 	// parallel. Supported exclusively for gRPC clients. If used with a JSON
 	// client, the configuration is ignored and a standard upload is performed.
 	//
-	// Upon completion of a parallel upload, the Writer makes a best-effort attempt to clean up any temporary parts created.
-	// It is recommended to set appropriate bucket lifecycle policies to reliably clean up any leftover objects to avoid unnecessary storage costs.
-	// Temporary parts have the prefix: "gcs-go-sdk-pu-tmp".
+	// Parallel uploads can yield higher throughput when uploading large objects,
+	// but there are several considerations and trade-offs. Please refer to
+	// the [Parallel Uploads] section in the package documentation for details.
 	//
 	// **Note:** This feature is currently experimental and its API surface may change
 	// in future releases. It is not yet recommended for production use.
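For context, enabling the feature on a Writer might look like the sketch below. This is untested and makes several assumptions: the field name `EnableParallelUpload` is taken from the commit title and is shown as a simple boolean toggle, which may not match the actual field type; `NewGRPCClient` is the package's gRPC constructor (required, since JSON clients ignore the setting); and the bucket, object, and file names are hypothetical.

```
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// Parallel uploads are supported only for gRPC clients; a JSON/HTTP
	// client ignores the setting and performs a standard upload.
	client, err := storage.NewGRPCClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	f, err := os.Open("large-file.bin") // hypothetical large local file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	w := client.Bucket("my-bucket").Object("my-object").NewWriter(ctx)
	w.EnableParallelUpload = true // assumption: boolean field, per commit title

	if _, err := io.Copy(w, f); err != nil {
		log.Fatal(err)
	}
	// Close finalizes the upload; the Writer makes a best-effort cleanup of
	// temporary "gcs-go-sdk-pu-tmp" part objects on completion or failure.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```

Since the feature is experimental, check the current Writer struct definition in the package before relying on this field.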
