[SPARK-55690] Schema evolution in DSv2 AppendData, OverwriteByExpression, OverwritePartitionsDynamic #54488
Closed · +1,041 −322

Commits (16)
b3963d0  Move ResolveMergeIntoSchemaEvolution.scala -> ResolveMergeIntoSchemaE…  (johanl-db)
05b32cb  Move schemaChanges from MergeIntoTable to ResolveSchemaEvolution  (johanl-db)
7be9d2a  Schema evolution for DSv2 INSERT  (johanl-db)
6bc2956  Merge branch 'master' into dsv2-schema-evolution-insert  (johanl-db)
a3042e3  Add tests, address comments  (johanl-db)
aeae2b4  Add tests  (johanl-db)
862fdb6  Fix checking catalog name in test  (johanl-db)
42ca213  Refactor to use single schema evolution rule + address comments  (johanl-db)
18f10a1  Minor improvements  (johanl-db)
b7e303f  Merge remote-tracking branch 'spark/master' into dsv2-schema-evolutio…  (johanl-db)
0cc771f  Resolve conflicts from https://github.com/apache/spark/pull/54704  (johanl-db)
65ba49a  Address comments  (johanl-db)
ac8fca6  Merge remote-tracking branch 'spark/master' into dsv2-schema-evolutio…  (johanl-db)
56ea329  Fix val->def writePrivileges override  (johanl-db)
d777acc  Update tests, remove mergeSchema in REPLACE WHERE  (johanl-db)
f8374b6  Address nits  (johanl-db)
...c/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveMergeIntoSchemaEvolution.scala
0 additions & 85 deletions. This file was deleted.
...talyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSchemaEvolution.scala
243 additions & 0 deletions. New file:
@@ -0,0 +1,243 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.catalyst.analysis

import scala.collection.mutable.ArrayBuffer

import org.apache.spark.internal.Logging
import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeMap}
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.catalyst.types.DataTypeUtils
import org.apache.spark.sql.catalyst.util.CaseInsensitiveMap
import org.apache.spark.sql.connector.catalog.{CatalogV2Util, TableCatalog, TableChange}
import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._
import org.apache.spark.sql.errors.{QueryCompilationErrors, QueryExecutionErrors}
import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.{ArrayType, AtomicType, DataType, MapType, StructField, StructType}

/**
 * A rule that resolves schema evolution for MERGE INTO.
 *
 * This rule will call the DSV2 Catalog to update the schema of the target table.
 */
object ResolveMergeIntoSchemaEvolution extends Rule[LogicalPlan] {

  override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
    // This rule should run only if all assignments are resolved, except those
    // that will be satisfied by schema evolution.
    case m @ MergeIntoTable(_, _, _, _, _, _, _) if m.evaluateSchemaEvolution =>
      val changes = m.changesForSchemaEvolution
      if (changes.isEmpty) {
        m
      } else {
        val finalAttrMapping = ArrayBuffer.empty[(Attribute, Attribute)]
        val newTarget = m.targetTable.transform {
          case r: DataSourceV2Relation =>
            val referencedSourceSchema = MergeIntoTable.sourceSchemaForSchemaEvolution(m)
            val newTarget =
              ResolveSchemaEvolution.performSchemaEvolution(r, referencedSourceSchema, changes)
            val oldTargetOutput = m.targetTable.output
            val newTargetOutput = newTarget.output
            val attributeMapping = oldTargetOutput.zip(newTargetOutput)
            finalAttrMapping ++= attributeMapping
            newTarget
        }
        val res = m.copy(targetTable = newTarget)
        res.rewriteAttrs(AttributeMap(finalAttrMapping.toSeq))
      }
  }
}

/**
 * A rule that resolves schema evolution for V2 INSERT commands.
 *
 * This rule will call the DSV2 Catalog to update the schema of the target table.
 */
object ResolveInsertSchemaEvolution extends Rule[LogicalPlan] {

  override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
    case v2Write: V2WriteCommand
        if v2Write.table.resolved && v2Write.query.resolved && v2Write.schemaEvolutionEnabled =>
      val changes = v2Write.changesForSchemaEvolution
      if (changes.isEmpty) {
        v2Write
      } else {
        EliminateSubqueryAliases(v2Write.table) match {
          case r: DataSourceV2Relation =>
            val newRelation = ResolveSchemaEvolution.performSchemaEvolution(
              r, v2Write.query.schema, changes, isByName = v2Write.isByName)
            val attrMapping: Seq[(Attribute, Attribute)] =
              r.output.zip(newRelation.output)
            v2Write.withNewTable(newRelation).rewriteAttrs(AttributeMap(attrMapping))
          case _ => v2Write
        }
      }
  }
}

/**
 * Shared schema evolution utilities used by both MERGE INTO and INSERT schema evolution rules.
 */
object ResolveSchemaEvolution extends Logging {

  /**
   * Applies schema evolution changes to a DSV2 relation by altering the table schema
   * through the catalog, then verifying all changes were applied.
   */
  def performSchemaEvolution(
      relation: DataSourceV2Relation,
      referencedSourceSchema: StructType,
      changes: Array[TableChange],
      isByName: Boolean = true): DataSourceV2Relation = {
    (relation.catalog, relation.identifier) match {
      case (Some(c: TableCatalog), Some(i)) =>
        c.alterTable(i, changes: _*)
        val newTable = c.loadTable(i)
        val newSchema = CatalogV2Util.v2ColumnsToStructType(newTable.columns())
        // Check if there are any remaining changes not applied.
        val remainingChanges =
          schemaChanges(newSchema, referencedSourceSchema, isByName = isByName)
        if (remainingChanges.nonEmpty) {
          throw QueryCompilationErrors.unsupportedTableChangesInAutoSchemaEvolutionError(
            remainingChanges, i.toQualifiedNameParts(c))
        }
        relation.copy(table = newTable, output = DataTypeUtils.toAttributes(newSchema))
      case _ =>
        logWarning(s"Schema Evolution enabled but data source $relation " +
          "does not support it, skipping.")
        relation
    }
  }

  /**
   * Computes the set of table changes needed to evolve the `originalTarget` schema
   * to accommodate the `originalSource` schema. When `isByName` is true, fields are matched
   * by name. When false, fields are matched by position.
   */
  def schemaChanges(
      originalTarget: StructType,
      originalSource: StructType,
      isByName: Boolean): Array[TableChange] =
    schemaChanges(originalTarget, originalSource, originalTarget, originalSource,
      fieldPath = Array(), isByName = isByName)

  private def schemaChanges(
      current: DataType,
      newType: DataType,
      originalTarget: StructType,
      originalSource: StructType,
      fieldPath: Array[String],
      isByName: Boolean): Array[TableChange] = {
    (current, newType) match {
      case (StructType(currentFields), StructType(newFields)) =>
        if (isByName) {
          schemaChangesByName(
            currentFields, newFields, originalTarget, originalSource, fieldPath)
        } else {
          schemaChangesByPosition(
            currentFields, newFields, originalTarget, originalSource, fieldPath)
        }

      case (ArrayType(currentElementType, _), ArrayType(newElementType, _)) =>
        schemaChanges(currentElementType, newElementType,
          originalTarget, originalSource, fieldPath ++ Seq("element"), isByName)

      case (MapType(currentKeyType, currentElementType, _),
          MapType(updateKeyType, updateElementType, _)) =>
        schemaChanges(currentKeyType, updateKeyType, originalTarget, originalSource,
          fieldPath ++ Seq("key"), isByName) ++
          schemaChanges(currentElementType, updateElementType,
            originalTarget, originalSource, fieldPath ++ Seq("value"), isByName)

      case (currentType: AtomicType, newType: AtomicType) if currentType != newType =>
        Array(TableChange.updateColumnType(fieldPath, newType))

      case (currentType, newType) if currentType == newType =>
        // No change needed.
        Array.empty[TableChange]

      case _ =>
        // Changes between atomic and complex types are not supported for now.
        throw QueryExecutionErrors.failedToMergeIncompatibleSchemasError(
          originalTarget, originalSource, null)
    }
  }

  /**
   * Match fields by name: look up each target field in the source by name to collect schema
   * differences. Nested struct fields are also matched by name.
   */
  private def schemaChangesByName(
      currentFields: Array[StructField],
      newFields: Array[StructField],
      originalTarget: StructType,
      originalSource: StructType,
      fieldPath: Array[String]): Array[TableChange] = {
    val newFieldMap = toFieldMap(newFields)

    // Update existing field types.
    val updates = currentFields.collect {
      case currentField: StructField if newFieldMap.contains(currentField.name) =>
        schemaChanges(currentField.dataType, newFieldMap(currentField.name).dataType,
          originalTarget, originalSource, fieldPath ++ Seq(currentField.name), isByName = true)
    }.flatten

    // Identify the newly added fields and append them to the end.
    val currentFieldMap = toFieldMap(currentFields)
    val adds = newFields.filterNot(f => currentFieldMap.contains(f.name))
      // Make the type nullable, since existing rows in the table will have NULLs for this column.
      .map(f => TableChange.addColumn(fieldPath ++ Set(f.name), f.dataType.asNullable))

    updates ++ adds
  }

  /**
   * Match fields by position: pair target and source fields in order to collect schema
   * differences. Nested struct fields are also matched by position.
   */
  private def schemaChangesByPosition(
      currentFields: Array[StructField],
      newFields: Array[StructField],
      originalTarget: StructType,
      originalSource: StructType,
      fieldPath: Array[String]): Array[TableChange] = {
    // Update existing field types by pairing fields at the same position.
    val updates = currentFields.zip(newFields).flatMap { case (currentField, newField) =>
      schemaChanges(currentField.dataType, newField.dataType,
        originalTarget, originalSource,
        fieldPath ++ Seq(currentField.name), isByName = false)
    }

    // Extra source fields beyond the target's field count are new additions.
    val adds = newFields.drop(currentFields.length)
      // Make the type nullable, since existing rows in the table will have NULLs for this column.
      .map(f => TableChange.addColumn(fieldPath ++ Set(f.name), f.dataType.asNullable))

    updates ++ adds
  }

  def toFieldMap(fields: Array[StructField]): Map[String, StructField] = {
    val fieldMap = fields.map(field => field.name -> field).toMap
    if (SQLConf.get.caseSensitiveAnalysis) {
      fieldMap
    } else {
      CaseInsensitiveMap(fieldMap)
    }
  }
}
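
For reference, a minimal sketch of what the public `schemaChanges` overload above computes. `SchemaChangesDemo` is a hypothetical driver, assuming a build of this patch on the classpath; the expected output in the comments is a reading of the rule above, not a verified run.

import org.apache.spark.sql.catalyst.analysis.ResolveSchemaEvolution
import org.apache.spark.sql.connector.catalog.TableChange
import org.apache.spark.sql.types.{DoubleType, IntegerType, LongType, StringType, StructField, StructType}

object SchemaChangesDemo {
  def main(args: Array[String]): Unit = {
    // Current table schema.
    val target = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType)))
    // Incoming query schema: `id` changed to long, new `score` column.
    val source = StructType(Seq(
      StructField("id", LongType),
      StructField("name", StringType),
      StructField("score", DoubleType)))

    // By-name matching, as used by MERGE INTO and by-name INSERT.
    val changes: Array[TableChange] =
      ResolveSchemaEvolution.schemaChanges(target, source, isByName = true)
    // Per the rule above, this should yield an updateColumnType for `id`
    // (any atomic type mismatch produces one) plus an addColumn for a
    // nullable `score` appended at the end.
    changes.foreach(println)
  }
}

Note that `performSchemaEvolution` recomputes `schemaChanges` against the reloaded table after `alterTable`: any change the catalog silently dropped shows up as a remaining change and fails the query, rather than producing a write against a stale schema.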
@johanl-db @szehon-ho, can you folks explain the relation between skipSchemaEvolution via ACCEPT_ANY_SCHEMA and automatic schema evolution via AUTOMATIC_SCHEMA_EVOLUTION? Are these two mutually exclusive? Or can they co-exist? MERGE vs INSERT?
MERGE with ACCEPT_ANY_SCHEMA on a normal DSv2 data source breaks today, as it relies on an external rule to resolve the merge.
I think INSERT already works with ACCEPT_ANY_SCHEMA, and this would be another mode. Probably they should be mutually exclusive?
Discussed with @aokolnychyi this morning:

AUTOMATIC_SCHEMA_EVOLUTION and ACCEPT_ANY_SCHEMA are not exclusive:
- AUTOMATIC_SCHEMA_EVOLUTION allows the rule ResolveSchemaEvolution to trigger.
- ACCEPT_ANY_SCHEMA skips some resolution steps in Spark, under the assumption that the connector will handle them.

At least, that's how Spark applies these capabilities today, even though the name ACCEPT_ANY_SCHEMA suggests more. The connector can choose to set either, depending on which resolution flow suits it.

For example, Delta today always handles schema evolution itself (doesn't set AUTOMATIC_SCHEMA_EVOLUTION) and does resolution / schema alignment (sets ACCEPT_ANY_SCHEMA).

As Delta moves to DSv2, my plan is to have two phases:
1. AUTOMATIC_SCHEMA_EVOLUTION and ACCEPT_ANY_SCHEMA: Spark handles schema evolution, but Delta takes over to do the resolution of MERGE clauses initially, and then does schema alignment for both INSERT and MERGE.
2. AUTOMATIC_SCHEMA_EVOLUTION only: once we've reconciled all behavior differences between how Delta and Spark do schema alignment today, we hand over schema alignment to Spark. This will require substantial effort, and careful breaking changes (if at all possible) in Delta.
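
To make the capability split concrete, a minimal sketch of a connector opting in to both behaviors. `EvolvingTable` is hypothetical; `ACCEPT_ANY_SCHEMA` is an existing `TableCapability`, and the `AUTOMATIC_SCHEMA_EVOLUTION` constant is assumed to live in the same enum, per the discussion above.

import java.util

import org.apache.spark.sql.connector.catalog.{Table, TableCapability}

// Hypothetical connector table combining the two capabilities discussed above.
abstract class EvolvingTable extends Table {
  override def capabilities(): util.Set[TableCapability] =
    util.EnumSet.of(
      // Let Spark's ResolveSchemaEvolution rules alter this table's schema.
      TableCapability.AUTOMATIC_SCHEMA_EVOLUTION,
      // Skip Spark-side schema checks; the connector aligns schemas itself.
      TableCapability.ACCEPT_ANY_SCHEMA)
}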