proto: serialize and dedupe dynamic filters #20416

jayshrivastava wants to merge 3 commits into apache:main
Conversation
Force-pushed from 158f2cf to e0c6be3
NGA-TRAN
left a comment
Glad you tracked down the root cause. Great explanation of the two issues.
The changes align well with your description, and the tests look solid. I’ll let the folks who know the Dynamic Filtering details comment further on the specifics.
```rust
/// An atomic snapshot of a [`DynamicFilterPhysicalExpr`] used to reconstruct the expression during
/// serialization / deserialization.
pub struct DynamicFilterSnapshot {
```
Added this type because I thought it looked cleaner / simpler than having getters on the `DynamicFilterPhysicalExpr` itself.
Maybe we should just add the getters and setters for the internal fields...
It seems like we end up exposing all of the internals anyway: if there were a way to return a `dyn Serializable` it'd be one thing, but the proto machinery still needs to know all of the fields to serialize.
I also think the name "snapshot" makes it sound like this is what would be returned from `PhysicalExpr::snapshot`, but that is not the case.
Could be... you kind of already have this, just to an intermediate structure instead.
gene-bordegaray
left a comment
Overall looks good. I had one idea for making the API more familiar to those new to it. Let me know what you think.
Force-pushed from 5d99a1f to 68e3f72
Overall, I think we should find alternative ways of doing this that do not imply special-casing the dynamic filter serialization process.
I shared some ideas on how to do this, but they probably need to be fleshed out a bit more.
The things that I think this PR should achieve are:
- No special casing for dynamic filters in serialization/deserialization code
- No changes to global protobuf messages just for the sake of dynamic filters, just a new normal entry in the enum: `PhysicalDynamicFilterNode`

Things that IMO we should have, probably in a separate PR:
- Stop playing with raw pointer addresses in dynamic filters and assign proper unique identifiers, probably using the `uuid` crate.
Force-pushed from c5d0e2f to fef4259
Fixups for the cherry-picked commits from PRs apache#19437, apache#20037, apache#20416, and jayshrivastava#2 to work with branch-52's partition-index APIs:
- Update remap_children callers to use instance method signature
- Adapt DynamicFilterUpdate::Global enum for new code paths
- Add missing partitioned_exprs/runtime_partition fields to new constructors
- Remove null_aware field (not on branch-52)
- Replace FilterExecBuilder with FilterExec::try_new
- Remove non-compiling tests that depend on upstream-only APIs
- Fix duplicate imports in roundtrip test file

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from cb23b01 to 18b0289
Force-pushed from d75e7f8 to e0ec773
Force-pushed from d64c33d to b419d4c
Informs: datafusion-contrib/datafusion-distributed#180
Closes: apache#20418

Consider you have a plan with a `HashJoinExec` and a `DataSourceExec`:

```
HashJoinExec(dynamic_filter_1 on a@0)
  (...left side of join)
  ProjectionExec(a := Column("a", source_index))
    DataSourceExec
      ParquetSource(predicate = dynamic_filter_2)
```

You serialize the plan, deserialize it, and execute it. What should happen is that the dynamic filter should "work", meaning:
1. When you deserialize the plan, both the `HashJoinExec` and `DataSourceExec` should have pointers to the same `DynamicFilterPhysicalExpr`.
2. The `DynamicFilterPhysicalExpr` should be updated during execution by the `HashJoinExec`, and the `DataSourceExec` should filter out rows.

This does not happen today for a few reasons, a couple of which this PR aims to address:
1. `DynamicFilterPhysicalExpr` does not survive round-tripping. The internal exprs get inlined (e.g. it may be serialized as a `Literal`) due to the `PhysicalExpr::snapshot()` API.
2. Even if `DynamicFilterPhysicalExpr` survives round-tripping, the one pushed down to the `DataSourceExec` often has different children. In this case, you have two `DynamicFilterPhysicalExpr`s which do not survive deduping, causing referential integrity to be lost.

This PR aims to fix those problems by:
1. Removing the `snapshot()` call from the serialization process.
2. Adding protos for `DynamicFilterPhysicalExpr` so it can be serialized and deserialized.
3. Adding a new concept, a `PhysicalExprId`, which has two identifiers: a "shallow" identifier to indicate two equal expressions which may have different children, and an "exact" identifier to indicate two exprs that are exactly the same.
4. Updating the deduping deserializer and protos to be aware of the new "shallow" id, deduping exprs which are the same but have different children accordingly.

This change adds tests which roundtrip dynamic filters and assert that referential integrity is maintained.
Force-pushed from b419d4c to dc683d3
Which issue does this PR close?
Informs: datafusion-contrib/datafusion-distributed#180
Informs: #21207
Closes: #20418

Rationale for this change
Consider you have a plan with a `HashJoinExec` and a `DataSourceExec`. You serialize the plan, deserialize it, and execute it. What should happen is that the dynamic filter should "work", meaning:
1. Both the `HashJoinExec` and `DataSourceExec` should have pointers to the same `DynamicFilterPhysicalExpr`.
2. The `DynamicFilterPhysicalExpr` should be updated during execution by the `HashJoinExec`, and the `DataSourceExec` should filter out rows.

This does not happen today for a few reasons, a couple of which this PR aims to address:
1. `DynamicFilterPhysicalExpr` does not survive round-tripping. The internal exprs get inlined (e.g. it may be serialized as a `Literal`) due to the `PhysicalExpr::snapshot()` API.
2. Even if `DynamicFilterPhysicalExpr` survives round-tripping, the one pushed down to the `DataSourceExec` often has different children. In this case, you have two `DynamicFilterPhysicalExpr`s which do not survive deduping, causing referential integrity to be lost.

This PR aims to fix those problems by:
1. Removing the `snapshot()` call from the serialization process.
2. Adding protos for `DynamicFilterPhysicalExpr` so it can be serialized and deserialized.
3. Adding a new concept, a `PhysicalExprId`, which has two identifiers: a "shallow" identifier to indicate two equal expressions which may have different children, and an "exact" identifier to indicate two exprs that are exactly the same.
4. Updating the deduping deserializer and protos to be aware of the new "shallow" id, deduping exprs which are the same but have different children accordingly.

Future work:
- `HashJoinExec` and other `ExecutionPlan`s which produce dynamic filters
- `PhysicalExtensionCodec` trait so implementors can utilize deduping logic

Are these changes tested?
Yes. This change adds tests which roundtrip dynamic filters and assert that referential integrity is maintained.

Are there any user-facing changes?
We no longer call `snapshot()` on `PhysicalExpr` during serialization. This means that `DynamicFilterPhysicalExpr`s are now serialized and deserialized without snapshotting.