Follow up on #403 (comment)
Currently, chunking defaults to `(1, 1, Z, Y, X)` for both v0.4 and v0.5 stores, and sharding is not used by default for v0.5. It may be good to update these defaults to use smaller chunks and 1-5 GB shards for v0.5 datasets, as is done in `ngff/utils.py`.
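As a minimal sketch of how a 1-5 GB shard default could be derived, the helper below (hypothetical, not part of the codebase; `suggest_shards_ratio` and its target size are assumptions) doubles the chunk counts along the trailing spatial axes until the shard approaches a target byte size:

```python
import numpy as np


def suggest_shards_ratio(
    chunks: tuple[int, ...],
    dtype,
    target_bytes: int = 2 * 1024**3,  # aim near 2 GiB, inside the 1-5 GB range
) -> tuple[int, ...]:
    """Hypothetical helper: pick a shards_ratio (chunks per shard along each
    axis) so that shard size stays at or below target_bytes."""
    itemsize = np.dtype(dtype).itemsize
    ratio = [1] * len(chunks)
    # grow only the trailing (spatial, e.g. ZYX) axes, round-robin
    axes = list(range(len(chunks)))[-3:]

    def shard_bytes() -> int:
        return int(np.prod([c * r for c, r in zip(chunks, ratio)])) * itemsize

    i = 0
    while shard_bytes() * 2 <= target_bytes:
        ratio[axes[i % len(axes)]] *= 2
        i += 1
    return tuple(ratio)
```

For example, with `(1, 1, 64, 256, 256)` uint16 chunks (8 MiB each), this yields a ratio of `(1, 1, 8, 8, 4)`, i.e. 256 chunks per shard for a 2 GiB shard.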
```python
def create_zeros(
    self,
    name: str,
    shape: tuple[int, ...],
    dtype: DTypeLike,
    chunks: tuple[int, ...] | None = None,
    shards_ratio: tuple[int, ...] | None = None,
    transform: list[TransformationMeta] | None = None,
    check_shape: bool = True,
):
    """Create a new zero-filled image array in the position.
    ...
    """
    if not chunks:
        chunks = self._default_chunks(shape, 3)
    if check_shape:
        self._check_shape(shape)
    arr_handle = self._create_zarr_array(name, shape, dtype, chunks, shards_ratio)
    img_arr = ImageArray.from_handle(arr_handle, self._impl)
    self._create_image_meta(name, transform=transform)
    return img_arr

@staticmethod
def _default_chunks(shape, last_data_dims: int):
    chunks = shape[-min(last_data_dims, len(shape)) :]
    return pad_shape(chunks, target=len(shape))
```
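To illustrate what the current `_default_chunks` fallback produces, here is a standalone sketch; the `pad_shape` stand-in below is an assumption about its behavior (left-padding with ones), not the actual implementation:

```python
def pad_shape(shape: tuple[int, ...], target: int) -> tuple[int, ...]:
    # Assumed behavior: left-pad with ones up to the target rank,
    # e.g. (64, 256, 256) -> (1, 1, 64, 256, 256) for target=5.
    return (1,) * (target - len(shape)) + tuple(shape)


def default_chunks(shape: tuple[int, ...], last_data_dims: int = 3) -> tuple[int, ...]:
    # Keep the trailing spatial dims whole, set leading (T, C) dims to 1.
    chunks = shape[-min(last_data_dims, len(shape)) :]
    return pad_shape(chunks, target=len(shape))


# A 5D TCZYX array chunks as one full 3D volume per (T, C) index:
default_chunks((2, 4, 64, 256, 256))  # -> (1, 1, 64, 256, 256)
```

This is the `(1, 1, Z, Y, X)` default mentioned above; for large volumes it can make individual chunks very big, which is part of the motivation for smaller chunks plus shards in v0.5.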