The `Table[T]` type provides a high-level, type-safe interface for managing DynamoDB tables. It abstracts away much of the boilerplate required for table lifecycle management, making it easier to work with DynamoDB in Go.
These features allow you to manage the full lifecycle of your DynamoDB tables in a concise, idiomatic Go style, while leveraging the full power of DynamoDB's management capabilities.
> **Note:** Only table-level scans are supported at the moment.

> **Note:** Table schema updates (such as adding or modifying attributes, indexes, or throughput settings after table creation) are not supported at this time. Only the table management functions listed above are available. Support for table updates is planned for a future release.
## Item Operations

The `Table[T]` type provides a set of strongly-typed methods for common item-level operations:

- `UpdateItem(ctx, item, ...) (*T, error)`: Update an existing item, using the struct as the source of changes.
- `DeleteItem(ctx, item, ...) error`: Delete an item by providing the struct value.
- `DeleteItemByKey(ctx, key, ...) error`: Delete an item by its key.
- `Scan(ctx, expr, ...) iter.Seq[ItemResult[*T]]`: Scan the table with a filter expression, returning an iterator over results.
- `ScanIndex(ctx, indexName, expr, ...) iter.Seq[ItemResult[*T]]`: Scan the named index with a filter expression, returning an iterator over results.
- `Query(ctx, expr, ...) iter.Seq[ItemResult[*T]]`: Query the table using a key condition expression, returning an iterator over results.
- `QueryIndex(ctx, indexName, expr, ...) iter.Seq[ItemResult[*T]]`: Query the named index using a key condition expression, returning an iterator over results.

**Batch operations:**

- `CreateBatchWriteOperation() *BatchWriteOperation[T]`: Returns a new batch write operation, allowing you to queue multiple put and delete requests and execute them efficiently in batches. Handles chunking, retries for unprocessed items, and respects DynamoDB's batch size limits.
  - Use `AddPut(item *T)` or `AddRawPut(map[string]types.AttributeValue)` to queue items for writing.
  - Use `AddDelete(item *T)` or `AddRawDelete(map[string]types.AttributeValue)` to queue items for deletion.
  - Call `Execute(ctx, ...)` to perform the batch write.
  - Use `Merge(otherBatchers...)` on a `BatchWriteOperation` to create a `BatchWriteExecutor` that can write to multiple tables in a single coordinated workflow.
- `CreateBatchGetOperation() *BatchGetOperation[T]`: Returns a new batch get operation, allowing you to queue multiple keys for retrieval and execute them in a single batch request. Handles chunking, retries for unprocessed keys, and respects DynamoDB's batch size limits.
  - Use `AddReadItem(item *T)` or `AddReadItemByMap(map[string]types.AttributeValue)` to queue keys for retrieval.
  - Call `Execute(ctx, ...)` to perform the batch get, which yields results as an iterator.
  - Use `Merge(otherBatchers...)` on a `BatchGetOperation` to create a `BatchGetExecutor` that can read from multiple tables in a single `BatchGetItem` workflow.

Batch operations are useful for efficiently processing large numbers of items, minimizing network calls, and handling DynamoDB's batch constraints automatically.
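
As an illustration of the chunking described above, here is a small self-contained sketch. The `chunk` helper is hypothetical, not part of this package's API, but it mirrors the splitting that batch operations perform to respect DynamoDB's per-call limits (at most 25 requests per `BatchWriteItem` call):

```go
package main

import "fmt"

// chunk is a hypothetical helper that mirrors the internal splitting:
// DynamoDB accepts at most 25 write requests per BatchWriteItem call.
func chunk[T any](items []T, size int) [][]T {
	var out [][]T
	for len(items) > size {
		out = append(out, items[:size:size])
		items = items[size:]
	}
	if len(items) > 0 {
		out = append(out, items)
	}
	return out
}

func main() {
	// 60 queued writes split into batches of at most 25 items each.
	for i, batch := range chunk(make([]int, 60), 25) {
		fmt.Printf("batch %d: %d items\n", i, len(batch))
	}
	// prints: batch 0: 25 items, batch 1: 25 items, batch 2: 10 items
}
```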
These methods are designed to be ergonomic and safe, leveraging Go's type system to reduce boilerplate and runtime errors when working with DynamoDB items.
**Iterators and ItemResult:**
Many methods, such as `Scan`, `Query`, and `BatchGetOperation.Execute`, return an iterator in the form of an `iter.Seq[ItemResult[*T]]`, which is a function that accepts a callback. Each callback invocation receives an `ItemResult[*T]` containing either a successfully decoded item (accessible via `Item()`) or an error encountered during retrieval or decoding (accessible via `Error()`). For batch operations that may span multiple tables, `ItemResult` also exposes a `Table()` method that returns the source table name for the item.
When consuming these iterators, use the callback or range pattern and always check the `Error()` method on each result before using the item:
```go
// Range-based iteration (idiomatic for iter.Seq):
for res := range table.Scan(ctx, expr) {
	if err := res.Error(); err != nil {
		// handle the retrieval or decoding error
		continue
	}
	item := res.Item()
	// use item ...
	_ = item
}
```
This pattern ensures robust error handling and makes it easy to process large result sets efficiently and safely.
### Advanced: merged batch operations
You can merge batch operations from multiple tables and execute them together. This is useful when you want to minimize network calls and still keep type-safe table APIs.
In these advanced scenarios, the merge APIs (`Merge`) let you coordinate multi-table operations while still using the high-level, type-safe table abstractions provided by this package. Due to Go generics limitations, merged executors always return `ItemResult[any]`, so you must type-assert each item, typically by switching on `res.Table()` and then asserting the concrete type of `res.Item()`.
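
The table-switch-then-assert pattern can be sketched with a self-contained stand-in; `anyResult` below is hypothetical and only models the shape of `ItemResult[any]` as yielded by a merged executor:

```go
package main

import "fmt"

type User struct{ Name string }
type Order struct{ ID int }

// anyResult is a hypothetical stand-in for ItemResult[any]:
// the item is untyped, so it must be type-asserted by the caller.
type anyResult struct {
	table string
	item  any
	err   error
}

func (r anyResult) Table() string { return r.table }
func (r anyResult) Item() any     { return r.item }
func (r anyResult) Error() error  { return r.err }

func main() {
	results := []anyResult{
		{table: "users", item: &User{Name: "alice"}},
		{table: "orders", item: &Order{ID: 7}},
	}
	for _, res := range results {
		if res.Error() != nil {
			continue
		}
		// Dispatch on the source table, then assert the concrete type.
		switch res.Table() {
		case "users":
			fmt.Println("user:", res.Item().(*User).Name)
		case "orders":
			fmt.Println("order:", res.Item().(*Order).ID)
		}
	}
}
```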
## Extensions
The entity manager supports an extension system that allows you to inject custom logic at key points in the item lifecycle. Extensions can be used for auditing, validation, automatic field population, versioning, atomic counters, and more.

The default extension registry includes several built-in extensions.

**How it works:**
- Fields with the `version` tag option are checked and incremented on each write. If the version in the database does not match the expected value, the write fails, preventing lost updates.
257
334
258
-
These extensions are automatically included when you use`DefaultExtensionRegistry`:
335
+
These extensions are included by default via `DefaultExtensionRegistry` when you create a new table, but you can override or customize the registry if needed.
You can also clone and customize the registry to add or remove extensions as needed for your application.
**Note:** Because extensions can modify how your data is processed, they are enabled by default via `DefaultExtensionRegistry`. If you need different behavior, provide a custom registry with `WithExtensionRegistry` when creating the table.