
Commit 6c97c86

Merge pull request #12 from cpriti-os/simplify-parallel-upload-docs-13523507583222314297

refactor: move EnableParallelUpload documentation to doc.go

2 parents c44e3f2 + c51ccc9

2 files changed: 36 additions & 23 deletions


storage/doc.go

Lines changed: 33 additions & 0 deletions
@@ -416,6 +416,39 @@ apply to single-shot uploads when user-provided checksum is provided.
 
 Automatic checksumming can be disabled using [Writer.DisableAutoChecksum].
 
+# Parallel Uploads
+
+The parallel upload feature splits a large object into multiple parts and uploads them
+in parallel. It is supported exclusively for gRPC clients. If used with a JSON
+client, the configuration is ignored and a standard upload is performed.
+
+Parallel uploads can yield higher throughput when uploading large objects.
+However, there are several things which must be kept in mind when choosing to
+use this strategy:
+  - Performing parallel uploads may incur additional costs. Class A
+    operations are performed to create each part. If a storage
+    class other than STANDARD is used, early deletion fees apply to deletion of
+    the parts.
+  - The service account/credentials used to perform the parallel
+    upload require `storage.objects.delete` in order to clean up the temporary
+    part objects.
+  - A failed upload can leave part objects behind
+    which will count as storage usage, and you will be billed for it.
+    Upon completion or failure of a parallel upload, the Writer makes a
+    best-effort attempt to clean up any temporary parts created. However, if the
+    program crashes there is no means for the client to perform the cleanup.
+    Temporary parts have the prefix: "gcs-go-sdk-pu-tmp". It is recommended to
+    set appropriate bucket lifecycle policies to reliably clean up any leftover
+    objects to avoid unnecessary storage costs.
+  - Using parallel uploads is not a one-size-fits-all solution.
+    They introduce overhead that is only offset when uploading
+    sufficiently large objects. The optimal threshold depends on many
+    factors; therefore, you should experiment with your specific
+    workload to determine if parallel uploads provide a benefit.
+
+**Note:** This feature is currently experimental and its API surface may change
+in future releases. It is not yet recommended for production use.
+
 # Storage Control API
 
 Certain control plane and long-running operations for Cloud Storage (including Folder
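The doc.go text above recommends setting a bucket lifecycle policy to reliably clean up any leftover temporary parts with the "gcs-go-sdk-pu-tmp" prefix. One way to do that from Go is sketched below, using the package's existing Lifecycle, LifecycleRule, and BucketAttrsToUpdate types; the bucket name is a placeholder and the one-day age threshold is an arbitrary choice for illustration.

```go
// Sketch: install a lifecycle rule that deletes leftover parallel-upload
// temporary parts (objects prefixed "gcs-go-sdk-pu-tmp") after one day.
// "my-bucket" is a placeholder; adapt the age threshold to your needs.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	lc := storage.Lifecycle{
		Rules: []storage.LifecycleRule{{
			// Delete matching objects outright.
			Action: storage.LifecycleAction{Type: storage.DeleteAction},
			// Only objects left behind by parallel uploads, older than a day.
			Condition: storage.LifecycleCondition{
				AgeInDays:     1,
				MatchesPrefix: []string{"gcs-go-sdk-pu-tmp"},
			},
		}},
	}
	if _, err := client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		Lifecycle: &lc,
	}); err != nil {
		log.Fatal(err)
	}
}
```

Note that if the bucket's storage class is not STANDARD, early deletion fees can still apply to parts deleted by the rule, as the documentation above points out.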

storage/writer.go

Lines changed: 3 additions & 23 deletions
@@ -183,29 +183,9 @@ type Writer struct {
 	// parallel. Supported exclusively for gRPC clients. If used with a JSON
 	// client, the configuration is ignored and a standard upload is performed.
 	//
-	// Parallel uploads can yield higher throughput when uploading
-	// large objects. However, there are some things which must be kept in mind
-	// when choosing to use this strategy:
-	//   - Performing parallel uploads may incur additional costs. Class A
-	//     operations are performed to create each part. If a storage
-	//     class other than STANDARD is used, early deletion fees apply to deletion of
-	//     the parts.
-	//   - The service account/credentials used to perform the parallel
-	//     upload require `storage.objects.delete` in order to clean up the temporary
-	//     part objects.
-	//   - A failed upload can leave part objects behind
-	//     which will count as storage usage, and you will be billed for it.
-	//     Upon completion or failure of a parallel upload, the Writer makes a
-	//     best-effort attempt to clean up any temporary parts created. However, if the
-	//     program crashes there is no means for the client to perform the cleanup.
-	//     Temporary parts have the prefix: "gcs-go-sdk-pu-tmp". It is recommended to
-	//     set appropriate bucket lifecycle policies to reliably clean up any leftover
-	//     objects to avoid unnecessary storage costs.
-	//   - Using parallel uploads is not a one-size-fits-all solution.
-	//     They introduce overhead that is only offset when uploading
-	//     sufficiently large objects. The optimal threshold depends on many
-	//     factors; therefore, you should experiment with your specific
-	//     workload to determine if parallel uploads provide a benefit.
+	// Parallel uploads can yield higher throughput when uploading large objects,
+	// but there are several considerations and trade-offs. Please refer to
+	// the [Parallel Uploads] section in the package documentation for details.
 	//
 	// **Note:** This feature is currently experimental and its API surface may change
 	// in future releases. It is not yet recommended for production use.
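For reference, here is a hypothetical sketch of how the EnableParallelUpload field (named in the PR title) might be used. The diff does not show the field's type, so a boolean toggle is assumed purely for illustration; consult writer.go for the real shape. storage.NewGRPCClient is the package's existing constructor for the gRPC transport, which this feature requires.

```go
// Hypothetical sketch of enabling parallel uploads on a Writer. The field's
// type is not shown in this diff; a boolean is assumed for illustration only.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	// Parallel uploads are supported exclusively for gRPC clients; with a
	// JSON client the setting is ignored and a standard upload is performed.
	client, err := storage.NewGRPCClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	w := client.Bucket("my-bucket").Object("large-object").NewWriter(ctx)
	w.EnableParallelUpload = true // assumed field shape; experimental API

	f, err := os.Open("large-file.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// On Close, the Writer makes a best-effort attempt to clean up any
	// temporary part objects it created.
	if _, err := io.Copy(w, f); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```

As the documentation notes, the credentials used here would also need `storage.objects.delete` so the Writer can remove its temporary parts.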
