chore(deps): update pyo3 requirement from 0.25.1 to 0.26.0 #106
Closed
dependabot[bot] wants to merge 1 commit into main from
Conversation
Updates the requirements on [pyo3](https://github.com/pyo3/pyo3) to permit the latest version.
- [Release notes](https://github.com/pyo3/pyo3/releases)
- [Changelog](https://github.com/PyO3/pyo3/blob/main/CHANGELOG.md)
- [Commits](PyO3/pyo3@v0.25.1...v0.26.0)

---
updated-dependencies:
- dependency-name: pyo3
  dependency-version: 0.26.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Author
OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`. If you change your mind, just re-open this PR and I'll resolve any conflicts on it.
fvaleye pushed a commit that referenced this pull request on Oct 22, 2025
# Description
This PR allows users to add multiple constraints at once using a HashMap.

# Related Issue(s)
- closes delta-io#1986

---------

Signed-off-by: JustinRush80 <69156844+JustinRush80@users.noreply.github.com>
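As an illustrative sketch of the dict-based API above (not delta-rs code; the function name is hypothetical), Delta CHECK constraints are persisted as table properties keyed `delta.constraints.<name>`, so accepting a mapping lets several constraints land in one metadata update:

```python
# Hypothetical sketch of multi-constraint handling; `constraints_to_properties`
# is an illustration, not a delta-rs function. Delta CHECK constraints are
# stored as table properties keyed "delta.constraints.<name>".
def constraints_to_properties(constraints, existing_properties=None):
    props = dict(existing_properties or {})
    for name, expr in constraints.items():
        key = f"delta.constraints.{name}"
        if key in props:
            # re-adding an existing constraint is an error
            raise ValueError(f"constraint '{name}' already exists")
        props[key] = expr
    return props

props = constraints_to_properties({"id_positive": "id > 0", "name_set": "name IS NOT NULL"})
```

A single metadata commit can then carry all the new properties at once.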
fvaleye pushed a commit that referenced this pull request on Oct 27, 2025
# Description
The description of the main changes of your pull request

---------

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
fvaleye pushed a commit that referenced this pull request on Dec 15, 2025
# Description
The description of the main changes of your pull request

---------

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…ks it (delta-io#3941)

# Description
Explicitly document that filesystem_check fixes the issue and does not only check it.
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
# Description
The description of the main changes of your pull request

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
# Description
The description of the main changes of your pull request

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…lta-io#4112)

# Description
Fix next-scan execution when upstream coalescing produces batches with rows from multiple files.

Changes:
- Split incoming batches into contiguous file_id runs before applying DV masks/transforms
- Buffer fan-out outputs via VecDeque to preserve row order
- Return `internal_datafusion_err!` on unexpected file_id column type instead of panicking
- Add tests for interleaved file IDs, fanout, and invalid/null file_id paths

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
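The batch-splitting step above can be modeled in a few lines. This is a toy sketch, not the Rust implementation: a coalesced batch carries a per-row file_id column, and slicing it into contiguous runs lets each file's DV mask and transform apply to its own rows only:

```python
# Toy model of splitting a coalesced batch into contiguous file_id runs.
# The function name and tuple shape are illustrative.
def contiguous_runs(file_ids):
    runs, start = [], 0
    for i in range(1, len(file_ids) + 1):
        # close the current run at end-of-input or when the id changes
        if i == len(file_ids) or file_ids[i] != file_ids[start]:
            runs.append((file_ids[start], start, i))  # (file_id, begin, end)
            start = i
    return runs

runs = contiguous_runs([1, 1, 2, 2, 1])  # three runs, preserving row order
```

Note that interleaved ids (the trailing `1`) produce a separate run rather than being merged, which is what preserves row order.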
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…delta-io#4127)

# Description
After the DataFusion 52 upgrade, DELETE and UPDATE operations fail with: `Arrow error: Invalid comparison operation: Utf8 == Utf8View`

This occurs when using string equality predicates like `table.delete().with_predicate("name = 'foo'")`.

## Root Cause
With DataFusion 52, Parquet string/binary columns may be represented as view types (`Utf8View`/`BinaryView`) when `datafusion.execution.parquet.schema_force_view_types` is enabled (default `true`). DELETE/UPDATE were resolving predicates and SET expressions against `snapshot.arrow_schema()` (base `Utf8`/`Binary`) but executing against the scan/provider schema (which can include view types). This mismatch caused filter evaluation errors like `Utf8 == Utf8View` (and similarly `Binary == BinaryView`).

## Fix
Resolve predicates against `DeltaScanConfig::table_schema()` instead of `snapshot.arrow_schema()`. This ensures predicate literals are coerced to match the actual execution schema (view types when enabled, base types when disabled).

## Changes
- delete.rs: Use `DeltaScanConfig::table_schema()` for predicate resolution
- update.rs: Use `DeltaScanConfig::table_schema()` for both SET expressions and WHERE predicates
- expr.rs: Add `ScalarValue::Dictionary` support to `fmt_expr_to_sql` (partition columns are dictionary-encoded in the execution schema)

## Tests
- test_delete_string_equality_utf8view_regression_4125
- test_update_string_equality_non_partition
- test_delete_partition_string_predicate_dictionary_formatting
- test_delete_binary_equality_non_partition
- test_delete_custom_session_schema_force_view_types_disabled
- Dictionary scalar formatting unit tests in expr.rs

# Related Issue(s)
- Fixes delta-io#4125

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
Co-authored-by: R. Tyler Croy <rtyler@brokenco.de>
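The root cause above reduces to a type-resolution question. A minimal sketch (type names only, not Arrow code; the helper names are hypothetical) of why resolving literals against the execution schema fixes the comparison:

```python
# Toy model of the Utf8 vs Utf8View mismatch and its fix.
VIEW_OF = {"Utf8": "Utf8View", "Binary": "BinaryView"}

def compare_eq(column_type, literal_type):
    # Arrow refuses to compare a base-typed literal to a view-typed column
    if column_type != literal_type:
        raise TypeError(f"Invalid comparison operation: {literal_type} == {column_type}")
    return True

def coerce_literal(literal_type, execution_type):
    # widen a base string/binary literal to the execution schema's view type
    return execution_type if VIEW_OF.get(literal_type) == execution_type else literal_type

# Before the fix: literal typed from the base snapshot schema -> TypeError.
# After: the literal is coerced to the execution schema's type first.
ok = compare_eq("Utf8View", coerce_literal("Utf8", "Utf8View"))
```

When `schema_force_view_types` is disabled the execution type stays `Utf8` and the coercion is a no-op, matching the "base types when disabled" behavior.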
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…elta-io#4142)

# Description
- Add a runtime version check in `__datafusion_table_provider__` to prevent FFI ABI mismatch segfaults
- Block capsule export when the installed datafusion major != 52
- Provide actionable error text with a QueryBuilder workaround

Changes:
- lib.rs: add `REQUIRED_DATAFUSION_PY_MAJOR`, `datafusion_python_version()`, guard at method start
- test_datafusion.py: add incompatible-version and not-installed tests

Note: This guard is a temporary safety net to prevent segfaults until DataFusion 52 Python wheels are available on PyPI. Once wheels land, users can install datafusion==52.* and use SessionContext registration normally.

# Related Issue(s)
- delta-io#4135

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
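The guard described above can be sketched as a simple major-version gate. This is an illustration of the shape, not the actual binding code; the constant name mirrors the commit message but the Python form is assumed:

```python
# Sketch of a runtime ABI guard: refuse to export the FFI capsule unless the
# installed datafusion major version matches the one this build was made for.
REQUIRED_DATAFUSION_PY_MAJOR = 52

def check_datafusion_abi(installed_version):
    major = int(installed_version.split(".")[0])
    if major != REQUIRED_DATAFUSION_PY_MAJOR:
        raise RuntimeError(
            f"datafusion {installed_version} is ABI-incompatible with this build "
            f"(requires {REQUIRED_DATAFUSION_PY_MAJOR}.x); "
            "use QueryBuilder as a workaround"
        )
```

Failing loudly here trades a clear Python exception for what would otherwise be a hard segfault inside the FFI boundary.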
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…down (delta-io#4144)

# Description
Fix next-scan Parquet predicate pushdown when `parquet_read_schema` uses Arrow view types (`Utf8View` / `BinaryView`). DataFusion may replace constant columns with literals using `PartitionedFile.statistics`. If those scalars are base-typed while the scan schema is view-typed, DataFusion errors:

```
expected Utf8View but found Utf8
```

**Changes:**
- Align per-file `PartitionedFile.statistics` scalar types to the Parquet read schema, so DF's constant-column/literal replacement doesn't produce base-typed arrays under a view-typed schema
- Build Parquet pushdown predicates via DF session state (`create_physical_expr`) to get canonical type coercion/rewrites
- Add focused unit tests for stats alignment behavior (string/binary view conversions, dictionary inner types, length-mismatch policy)

**Notes:** Scoped to next-scan, predicate pushdown, and stats typing only. No write-path changes.

cc @roeap

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 1, 2026
…ies (delta-io#4145)

# Description
Make delta-rs' DataFusion integration consistently honor caller-provided sessions and introduce a session-first API for registering Delta object stores.

**Changes:**

Session resolution:
- Add `resolve_session_state(...)` & `SessionFallbackPolicy` (`InternalDefaults` / `DeriveFromTrait` / `RequireSessionState`)
- Builders expose `with_session_fallback_policy(...)` to control strictness
- Migrate operations to use the resolver (optimize/merge/update/write/delete)

Session-first registration:
- Add `DeltaSessionExt` trait:
  - `ensure_object_store_registered(...)`
  - `ensure_log_store_registered(...)`
- Deprecate `DeltaTable::update_datafusion_session(...)` (shim kept for compatibility)

Predicate parsing:
- Non-`SessionState` sessions can preserve UDFs when configured via `DeriveFromTrait`

**Compatibility:**
- Default is backward compatible: `SessionFallbackPolicy::InternalDefaults` warns but doesn't break
- Strict mode available: `with_session_fallback_policy(RequireSessionState)` errors instead of falling back
- `DeltaTable::update_datafusion_session` remains but is deprecated

**Tests:**
- Regression tests for fallback policy wiring across builders (`RequireSessionState` path)
- Existing `deltalake-core` DataFusion test suite passes with `--features datafusion`

# Related Issue(s)
Addresses:
- delta-io#4081
- delta-io#4139

# Follow-ups
- Flip the default to `RequireSessionState` (breaking change)
- Remove the deprecated `DeltaTable::update_datafusion_session` after a deprecation window

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
# Description
Fixes delta-io#4156, where `write_deltalake(..., schema_mode="merge")` could fail during planning on append when the input omits a nullable, non-generated column in a table that has generated columns.

**What changed:**
- `with_generated_columns()` now preserves the input projection and only appends expressions for generated columns that are missing from the input.

**Note:** Scope generated-column planning to generated columns only. Missing non-generated columns are left to schema evolution (SchemaMode::Merge) and filled with NULL when nullable.

# Related Issue(s)
- delta-io#4156

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
# Description
Fix the kernel-to-DataFusion column expression conversion to preserve exact `ColumnName` path segments. Fixes delta-io#4082.

**Fix:** Use DataFusion `ident(...)` for the base column segment when converting `Expression::Column`, then `.field(...)` for remaining path segments. This preserves exact segment names and avoids SQL-style normalization.

# Related Issue(s)
- closes delta-io#4082

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
# Description
The synthetic `file_id` column type was defined independently, allowing silent drift between `Int32` and `UInt16`. This causes type coercion failures in DML paths that build `file_id IN (...)` predicates.

**Changes:**
- Adds `file_id.rs` as the single source of truth for the column name, data type, field constructor, and dictionary-aligned literal wrapper. All call sites now go through these helpers.
- Adds a guard to chunk `FileGroup`s so the per-group partition dictionary cannot exceed the `UInt16` keyspace, with two counters (`count_file_groups_planned`, `count_file_group_chunks`) to observe when chunking triggers.

`file_id` remains an internal correlation mechanism. This reduces its surface area and makes removal easier once `ParquetAccessPlan`-based scan filtering lands.

# Related Issue(s)
Related:
- delta-io#4113
- delta-io#4115

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
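The keyspace guard above has a simple shape: a `UInt16` dictionary key can distinguish at most 2^16 files per group, so oversized groups must be chunked. An illustrative sketch (function name assumed, not the Rust code):

```python
# Illustrative guard: file_id is dictionary-encoded with UInt16 keys, so a
# single file group must not reference more than 2**16 distinct files.
UINT16_KEYSPACE = 1 << 16  # 65536 distinct dictionary keys

def chunk_file_group(files, max_files=UINT16_KEYSPACE):
    # split an oversized group so each chunk's dictionary fits the keyspace
    return [files[i:i + max_files] for i in range(0, len(files), max_files)]
```

The two counters mentioned in the commit would then track how many groups were planned versus how many chunks this split actually produced.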
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
…emoval (delta-io#4172)

# Description
DML operations scope scans to specific files via `DeltaTableProvider::with_files(Vec<Add>)`. The next provider (`DeltaScan`) has no equivalent. This PR adds that foundation.

Introduces `FileSelection`, a `pub(crate)` struct holding normalized file IDs (`HashSet<String>`) that scopes scans to specific files. Unselected files are excluded in the replay phase before DV tasks or data reads are produced. No callers yet; they will be added when DML operations migrate.

**Migration path**
1. This PR: introduce `FileSelection`, wire into the `DeltaScan` pipeline
2. Convert DML ops from `DeltaTableProvider::with_files(Vec<Add>)` to `DeltaScan::with_file_selection(FileSelection::from_adds(...))`
3. Convert optimize's partition-filtered scans to `FileSelection::from_paths`
4. Remove `DeltaTableProvider` and `with_files`

CC: @roeap

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
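A hypothetical Python rendering of the `FileSelection` idea, to show where the filter sits: normalize the file ids once, then drop unselected files during log replay before any DV or data work is scheduled. Names and the normalization rule here are assumptions for illustration:

```python
# Toy FileSelection: a normalized id set consulted during replay.
class FileSelection:
    def __init__(self, paths):
        self.ids = {p.lstrip("/") for p in paths}  # normalize to relative ids

    def contains(self, path):
        return path.lstrip("/") in self.ids

def replay_files(add_paths, selection=None):
    # selection=None keeps the default behavior of scanning every file
    return [p for p in add_paths if selection is None or selection.contains(p)]
```

Filtering at replay time (rather than after the scan) is what lets DV tasks and data reads be skipped entirely for unselected files.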
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
# Description
Adds `DeltaTable.deletion_vectors() -> RecordBatchReader`, returning one row per data file with a deletion vector. Schema: `filepath: utf8`, `selection_vector: list[bool]` (true = keep, false = deleted). Reuses the existing DataFusion replay path via `replay_deletion_vectors(...)`. Results are deterministic and sorted by filepath.

**Core changes:**
- `DeletionVectorSelection` struct, `DeltaScan::deletion_vectors()`, shared `scan_metadata_stream()` helper to avoid drift between scan paths
- Replaced internal DV `expect(...)` with typed error propagation

**Python binding:**
- `cloned_table_and_state()` to avoid TOCTOU on table + snapshot
- Chunked Arrow batch output with non-nullable list items
- Preserves `without_files` guard behavior

# Related Issue(s)
- Closes delta-io#4159

cc @ion-elgreco

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 10, 2026
# Description
The description of the main changes of your pull request

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
…#4191)

# Description
Fixes a regression where `schema_mode="merge"` appends strip `delta.generationExpression` from the table schema. Once lost, subsequent writes compute NULL instead of generated values.

# Related Issue(s)
- closes delta-io#4186

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
# Description
Follow-up to the DF 52 upgrade. Replaces custom scan batch casting logic with DataFusion's `BatchAdapterFactory` via `datafusion-physical-expr-adapter`. Adds hardening tests for schema evolution and DV scan behavior. Keeps DF default behavior with no custom compatibility mode.

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
# Description
Fix regression where `DELETE` with partition-only predicates failed to remove empty files in matching partitions.

**Root Cause:** When the delete predicate references only partition columns, file removal should be decided from log metadata alone. The prior implementation relied on scan-derived results, and empty files (zero rows) produced no matches, so they weren't removed even though their partition satisfied the predicate.

**Changes:**
- Add partition-only delete fast path: evaluates the predicate against `Add.partition_values` (metadata only)
- Refine partition-only predicate detection, remove redundant validation
- Add `remove_from_add` helper for removing files via their `Add` actions
- Add helper for partition predicate file finding

**Tests:** Regression coverage for:
- Partition-only deletes remove empty files in matching partitions (repro for delta-io#4149)
- NULL partition values handled correctly
- Partition-only path avoids unnecessary scanning

```bash
cargo fmt --check
cargo test --workspace
cargo test -p deltalake-core --features datafusion
```

# Related Issue(s)
- Fixes delta-io#4149

Notes:
- Scoped to partition-only predicates; intended semantics preserved
- NULL partition handling explicitly tested

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
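The metadata-only fast path above can be sketched as follows. This is an illustration (the tuple layout and function name are assumptions, not delta-rs code): removal is decided from each `Add` action's partition values, so zero-row files in matching partitions are removed without scanning any data:

```python
# Sketch of a partition-only delete fast path over Add-action metadata.
# Each entry is (path, partition_values, num_rows); num_rows is never
# consulted, which is exactly why empty files are handled correctly.
def partition_only_delete(adds, predicate):
    return [path for path, partition_values, _num_rows in adds
            if predicate(partition_values)]

adds = [
    ("p=1/a.parquet", {"p": "1"}, 10),
    ("p=1/empty.parquet", {"p": "1"}, 0),   # empty file, still removed
    ("p=2/b.parquet", {"p": "2"}, 5),
]
removed = partition_only_delete(adds, lambda pv: pv["p"] == "1")
```

A scan-based implementation would have produced no matching rows for `empty.parquet` and left it in place, which is the regression being fixed.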
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
# Description
Follow-up to the DF scan adapter integration. Adds a cached schema-adapter path in scan execution to avoid rebuilding adapters for repeated source schemas. Updates scan finalization to use the cached path. Fixes DV mask handling edge cases: preserves the remainder across multi-batch file reads, pads short masks with implicit true, and errors when the mask length exceeds the batch row count.

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
# Description
In 1.3.2, a delete with a partition-only predicate goes through `find_files()`, which calls `get_add_action()` and concatenates all file metadata into one giant RecordBatch. Arrow's StringArray uses 32-bit offsets, so a single array can hold at most ~2GB of string data; a huge table cannot be held in a single array. If approved, this will be backported to 1.3.3.

---------

Signed-off-by: vsmanish1772 <smanish1772@gmail.com>
Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
Co-authored-by: Ethan Urbanski <ethanurbanski@gmail.com>
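The fix shape implied above can be sketched generically: because `StringArray` offsets are `int32`, one array holds fewer than 2^31 - 1 bytes of string data, so metadata must be emitted in byte-bounded chunks rather than one concatenated batch. This is an illustrative sketch (function name assumed; the limit is lowered in the demo below):

```python
# Chunk a list of strings so each chunk's total UTF-8 byte length stays
# under the int32 offset limit of an Arrow StringArray.
def chunk_by_utf8_bytes(paths, byte_limit=2**31 - 1):
    chunks, current, size = [], [], 0
    for p in paths:
        n = len(p.encode("utf-8"))
        if current and size + n > byte_limit:
            chunks.append(current)   # flush before overflowing the limit
            current, size = [], 0
        current.append(p)
        size += n
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then become its own RecordBatch, so no single offsets buffer overflows.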
rtyler pushed a commit that referenced this pull request on Feb 16, 2026
# Description
Clean up from the DF52 upgrade.

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 28, 2026
# Description
Upgrades the Python DataFusion path to 52.x and makes the integration lane blocking in CI.

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 28, 2026
# Description
Vacuum Lite Mode only deletes stale tombstone files, but the current implementation does a full file listing regardless of Lite or Full mode. This change avoids listing storage for Lite mode and simplifies and clarifies the logic by segregating concerns for each mode.

# Related Issue(s)
- closes delta-io#4228

# Documentation
Added test cases to test and clarify intent.

---------

Signed-off-by: Khalid Mammadov <khalidmammadov9@gmail.com>
Signed-off-by: R. Tyler Croy <rtyler@brokenco.de>
Co-authored-by: R. Tyler Croy <rtyler@brokenco.de>
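The mode split described above can be sketched in a few lines. This is an illustration of the control flow only (function names are assumptions, not the delta-rs API): Lite derives candidates from the log's tombstones alone, while Full pays for a storage listing to also catch untracked files:

```python
# Sketch of vacuum candidate selection by mode.
def vacuum_candidates(mode, tombstones, list_storage):
    if mode == "lite":
        return list(tombstones)   # metadata only: no storage listing needed
    return list_storage()         # full: also catches untracked files
```

Separating the two paths is what makes the cost difference explicit: Lite never touches the object store listing API.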
fvaleye pushed a commit that referenced this pull request on Feb 28, 2026
…lta-io#4211)

# Description
Some follow-up/hardening changes from the recent partition-only delete work. The DELETE partition-only fallback and add-action evaluation could materialize all actions into a single batch, which breaks on large tables.

Changes:
- DELETE fallback uses batched partition metadata instead of single-batch materialization
- Shared partition metadata MemTable builder across scan and DELETE paths
- Snapshot fast path for partition-only column projection
- add_actions coalescing streams directly into BatchCoalescer instead of pre-collecting
- Python docs note the get_add_actions() return type migration

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Feb 28, 2026
# Description
Fixes delta-io#4235: `DeltaTable.deletion_vectors()` returned truncated selection vectors when the highest deleted row index was below the file's total row count. Kernel returns a sparse DV mask (up to the highest deleted index); the API returned that raw mask directly, which could be shorter than numRecords.

**What Changed**
- Plumb `num_records: Option<u64>` through the scan replay DV side channel
- Pad short masks with `true` up to `numRecords` at the API boundary
- Error if the mask exceeds `numRecords` or `numRecords` is missing

This is now a stricter contract: `deletion_vectors()` fails if a DV file is missing `numRecords` instead of returning a truncated mask.

**Upstream Kernel Note**
If kernel can return full-length selection vectors, this normalization will not be needed. Will look into whether an upstream feature on delta-kernel is welcome for a length-aware selection vector API.

# Related Issue(s)
- delta-io#4235

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
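The normalization at the API boundary has a compact shape. A sketch under the commit's stated contract (the function name is illustrative, not the delta-rs API): the kernel's mask is sparse, covering rows only up to the highest deleted index, so it is padded with `true` (= keep) out to `numRecords`, and missing or oversized metadata is an error:

```python
# Sketch of selection-vector normalization: pad sparse masks to numRecords.
def normalize_selection_vector(mask, num_records):
    if num_records is None:
        raise ValueError("deletion vector file is missing numRecords")
    if len(mask) > num_records:
        raise ValueError("selection vector longer than numRecords")
    # rows past the sparse mask carry no deletions, so they are kept
    return mask + [True] * (num_records - len(mask))
```

With `mask=[True, False]` and `num_records=4`, the result keeps rows 0, 2, and 3 and deletes row 1, instead of silently dropping rows 2 and 3 from the vector.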
fvaleye pushed a commit that referenced this pull request on Feb 28, 2026
# Description
Replaces the local `DeltaWriter` loop in `DeltaDataSink::write_all` with `operations::write::execution::write_streams(...)`. The same change is applied to `write_data_plan`.

# Related Issue(s)
- closes delta-io#4189

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Mar 10, 2026
# Description
Found this while working on delta-io#4266. Merge target subset filters can retain decimal precision/scale from the source expression instead of the target schema, for example `decimal(4, 1)` when the column is `decimal(6, 1)`. The newer file-skipping path rejects the mismatch. The fix normalizes `target_subset_filter` against the target schema before simplification and extends literal coercion to handle BETWEEN bounds.

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Mar 15, 2026
# Description
Migrate the merge target scan to `DeltaScanNext`. Route the merge predicate into file skipping instead of scan filters, and match rewritten files against full file IDs.

# Related Issue(s)
- delta-io#4239

---------

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Mar 15, 2026
# Description
The description of the main changes of your pull request

Signed-off-by: Ion Koutsouris <15728914+ion-elgreco@users.noreply.github.com>
fvaleye pushed a commit that referenced this pull request on Apr 7, 2026
…delta-io#4299)

# Description
Tighten the Python compatibility handling in `DeltaTable.create()` and `DeltaTable.vacuum()`, ensuring duplicate values are rejected when legacy positional arguments are mixed with keywords.

Signed-off-by: Ethan Urbanski <ethan@urbanskitech.com>
fvaleye pushed a commit that referenced this pull request on Apr 8, 2026
# Description
- Replaces the manual commit pipeline (into_prepared_commit_future + manual write_commit_entry) in restore.rs with the standard pipeline, so all 3 stages run automatically like other endpoints
- Exposes `post_commithook_properties` in the Python binding, public API, and type stub
- Adds tests verifying the parameter is accepted with a post-commit hook

# Related Issue(s)
- closes delta-io#4251

---------

Signed-off-by: Byeori Kim <bk.byeori.kim@gmail.com>
Co-authored-by: Ethan Urbanski <ethanurbanski@gmail.com>
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)