
[Spark] Drop Type widening feature: read Parquet footers to collect files to rewrite #3155

Merged

Conversation

johanl-db (Collaborator)

What changes were proposed in this pull request?

The initial approach to identifying files that contain a type differing from the table schema, and that must be rewritten before dropping the type widening table feature, was convoluted and turned out to be more brittle than intended.

This change instead switches to reading each file's schema directly from its Parquet footer and rewriting all files that have a mismatching type.
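As a rough illustration of the footer-based strategy (a plain-Python sketch with invented names, not the actual Spark/Scala implementation), mismatch detection boils down to comparing each file's physical schema against the current table schema:

```python
# Hypothetical sketch: flag any file whose footer schema disagrees with
# the table schema. In the real implementation the per-file schemas come
# from Parquet footers, not from in-memory dicts.

def files_to_rewrite(table_schema, file_schemas):
    """table_schema: {column: type}; file_schemas: {path: {column: type}}.
    Returns the paths whose footer schema disagrees with the table schema."""
    mismatched = []
    for path, schema in file_schemas.items():
        # A file needs rewriting if any column it contains was written
        # with a different physical type than the table now declares.
        if any(schema.get(col) not in (None, typ)
               for col, typ in table_schema.items()):
            mismatched.append(path)
    return mismatched

table = {"id": "long", "score": "double"}
files = {
    "part-0.parquet": {"id": "int", "score": "double"},   # 'id' widened int -> long
    "part-1.parquet": {"id": "long", "score": "double"},  # already matches
}
print(files_to_rewrite(table, files))  # -> ['part-0.parquet']
```

In the real code, the file schemas are obtained by reading the Parquet footers in parallel via DeltaFileOperations.readParquetFootersInParallel.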

Additional Context

Files are identified using their default row commit version (a part of the row tracking feature) and matched against type changes previously applied to the table and recorded in the table metadata: any file written before the latest type change should use a different type and must be rewritten.

This requires multiple pieces of information to be accurately tracked:

  • Default row commit versions must be correctly assigned to all files, e.g. files that are copied over without modification must never be assigned a new default row commit version. In addition, default row commit versions are preserved across CLONE, but those versions don't match anything in the newly cloned table.
  • Type change history must be reliably recorded and preserved across schema changes, e.g. column mapping.

Any bug will likely lead to files not being correctly rewritten before removing the table feature, potentially leaving the table in an unreadable state.
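For contrast, the previous heuristic described above can be sketched like this (hypothetical names; the real logic matched default row commit versions against the type change history recorded in the table metadata). The sketch also hints at where CLONE breaks it: a preserved commit version refers to the source table's history and is meaningless in the cloned table.

```python
# Hypothetical sketch of the previous, commit-version-based heuristic:
# any file whose default row commit version predates the latest recorded
# type change is assumed to still hold the old type.

def files_to_rewrite_by_version(latest_type_change_version, files):
    """files: {path: default_row_commit_version or None}."""
    return [
        path for path, version in files.items()
        # A missing or stale version forces a rewrite. This comparison is
        # the brittle part: versions preserved across CLONE refer to the
        # source table's history, not this table's.
        if version is None or version < latest_type_change_version
    ]

files = {"a.parquet": 3, "b.parquet": 7, "c.parquet": None}
print(files_to_rewrite_by_version(5, files))  # -> ['a.parquet', 'c.parquet']
```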

How was this patch tested?

Tests covering CLONE and RESTORE were added in a previous PR: #3053
Tests are added and updated in this PR to cover rewriting files with mismatching column types when removing the table feature.

@johanl-db johanl-db self-assigned this May 24, 2024
@johanl-db johanl-db force-pushed the type-widening-drop-feautre-read-footers branch from 8fe1968 to 29e7fd0 on May 24, 2024 12:09
@@ -323,8 +324,9 @@ case class TypeWideningPreDowngradeCommand(table: DeltaTableV2)
reorgTableSpec = DeltaReorgTableSpec(DeltaReorgTableMode.REWRITE_TYPE_WIDENING, None)
)(Nil)

reorg.run(table.spark)
numFilesToRewrite
val rows = reorg.run(table.spark)
Contributor:

is this always guaranteed to return at least one row?


@@ -309,8 +309,9 @@ case class TypeWideningPreDowngradeCommand(table: DeltaTableV2)
* @return Return the number of files rewritten.
*/
private def rewriteFilesIfNeeded(): Long = {
val numFilesToRewrite = TypeWidening.numFilesRequiringRewrite(table.initialSnapshot)
if (numFilesToRewrite == 0L) return 0L
if (!TypeWideningMetadata.containsTypeWideningMetadata(table.initialSnapshot.schema)) {
Contributor:

Is it guaranteed that there are no files to rewrite if the metadata is not present?
I guess if there is no metadata but there are stale files, the table is not fully readable anyway?

johanl-db (Collaborator, Author):

Yes, this is part of the feature spec: https://github.com/delta-io/delta/blob/master/protocol_rfcs/type-widening.md?plain=1#L125
The metadata must be preserved unless all files have been rewritten since the latest type change.

This makes dropping the feature a lot more efficient in two cases:

  • No type change was ever applied to the table: there is no need to check all the Parquet footers.
  • The DROP FEATURE command must be re-run after waiting for the retention period to pass: again, no need to read the footers.
    Note: this was already the case before, since numFilesRequiringRewrite relied on the presence of type widening metadata.
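The fast path described in the bullets above can be sketched as follows (invented helper names, not the actual Delta code): the presence of type widening metadata in the schema gates the expensive footer scan.

```python
# Hypothetical sketch: skip the footer scan entirely when the schema
# carries no type-widening metadata, since the spec guarantees that the
# metadata is preserved until all files have been rewritten.

def rewrite_files_if_needed(schema_has_widening_metadata, scan_footers):
    if not schema_has_widening_metadata:
        return 0  # nothing to rewrite; no footer reads needed
    return scan_footers()

calls = []
def scan():
    # Stand-in for the expensive parallel footer scan.
    calls.append("scan")
    return 2

print(rewrite_files_if_needed(False, scan))  # -> 0 (scan never invoked)
print(rewrite_files_if_needed(True, scan))   # -> 2
```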

@johanl-db johanl-db requested a review from sabir-akhadov May 31, 2024 16:00
val footers = DeltaFileOperations.readParquetFootersInParallel(
configuration,
fileStatuses.toList,
ignoreCorruptFiles)
Contributor:

So this param will allow it to ignore corrupt files? Should we instead never ignore corrupt files? It could be dangerous: a corrupt file might recover later on, and since it was skipped and never rewritten, the table would become unreadable.
Any modification we make to a table while ignoring corrupt files is potentially unsafe.

johanl-db (Collaborator, Author):

Good catch, I updated it to always fail on corrupted files + added a test
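The fail-fast behavior adopted here can be sketched like this (hypothetical reader function, not the real DeltaFileOperations API): a corrupt footer aborts the whole command instead of being silently skipped.

```python
# Hypothetical sketch: reading footers with a fail-fast default. Skipping
# a corrupt file would leave it un-rewritten, so the safe choice is to
# raise and abort the DROP FEATURE command.

def read_footers(paths, read_one, ignore_corrupt=False):
    footers = []
    for path in paths:
        try:
            footers.append(read_one(path))
        except IOError:
            if not ignore_corrupt:
                raise  # fail the whole command rather than risk a bad table
    return footers

def read_one(path):
    # Stand-in for an actual Parquet footer read.
    if "corrupt" in path:
        raise IOError(f"cannot read footer of {path}")
    return {"path": path}

try:
    read_footers(["ok.parquet", "corrupt.parquet"], read_one)
except IOError as e:
    print("aborted:", e)
```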

@johanl-db johanl-db requested a review from sabir-akhadov June 3, 2024 13:04
@sabir-akhadov (Contributor) left a comment:

lgtm, thanks

@vkorukanti vkorukanti merged commit cd95799 into delta-io:master Jun 5, 2024
10 checks passed