
Improve performance of delete+insert incremental strategy with null equality check changes #834

Open · wants to merge 3 commits into base: main

Conversation

@adrianburusdbt (Contributor) commented Feb 21, 2025

This PR merges the null-equality changes from #744 into the existing PR #151.

Problem

Solution

Checklist

  • I have read the contributing guide and understand what's expected of me
  • I have run this code in development and it appears to resolve the stated issue
  • This PR includes tests, or tests are not required/relevant for this PR
  • This PR has no interface changes (e.g. macros, cli, logs, json artifacts, config files, adapter interface, etc) or this PR has already received feedback and approval from Product or DX
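To illustrate why the null-equality change matters for a delete+insert incremental strategy: plain SQL equality treats `NULL = NULL` as unknown, so target rows with NULL keys are never matched (and never deleted) when comparing against staged rows. The sketch below is illustrative only, using SQLite's null-safe `IS` operator as a stand-in for `IS NOT DISTINCT FROM`; the table and column names are hypothetical, not from this PR.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (k TEXT, v INTEGER)")
cur.execute("CREATE TABLE staging (k TEXT, v INTEGER)")
cur.executemany("INSERT INTO target VALUES (?, ?)", [("a", 1), (None, 2)])
cur.executemany("INSERT INTO staging VALUES (?, ?)", [("a", 10), (None, 20)])

# Plain equality: NULL = NULL evaluates to NULL (not true), so the
# NULL-keyed target row is never matched and would survive the delete.
cur.execute("""
    SELECT count(*) FROM target t
    WHERE EXISTS (SELECT 1 FROM staging s WHERE s.k = t.k)
""")
plain_matches = cur.fetchone()[0]   # 1: only the 'a' row matches

# Null-safe equality (SQLite's IS, analogous to IS NOT DISTINCT FROM):
# NULL keys now match, so both rows are picked up for deletion.
cur.execute("""
    SELECT count(*) FROM target t
    WHERE EXISTS (SELECT 1 FROM staging s WHERE s.k IS t.k)
""")
null_safe_matches = cur.fetchone()[0]   # 2: both rows match
```

With plain `=`, stale NULL-keyed rows would accumulate in the target table across incremental runs; the null-safe comparison removes them as expected.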

@VanTudor (Contributor)
Code lgtm, but let's address those failing tests

@mikealfare (Contributor) left a comment

Overall this is much cleaner. Assuming tests are happy then this looks good.


delete from {{ target }}
where ({{ unique_key_str }}) in (
select distinct {{ unique_key_str }}

I think the distinct here is unnecessary, and it could make the delete take longer since the engine has to deduplicate the subquery first. I don't have proof of that, but I generally don't use distinct inside a where <x> in <y> clause.
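The reviewer's semantic point is easy to check: `IN` is a set-membership test, so duplicates in the subquery cannot change which rows are deleted, and dropping `distinct` is safe. A small sketch (SQLite for illustration; table names hypothetical, not from this PR):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (k TEXT)")
cur.execute("CREATE TABLE staging (k TEXT)")
cur.executemany("INSERT INTO target VALUES (?)", [("a",), ("b",), ("c",)])
# staging deliberately contains duplicate keys
cur.executemany("INSERT INTO staging VALUES (?)", [("a",), ("a",), ("b",)])

# Variant 1: with DISTINCT in the subquery.
cur.execute("DELETE FROM target WHERE k IN (SELECT DISTINCT k FROM staging)")
cur.execute("SELECT count(*) FROM target")
remaining = cur.fetchone()[0]  # only 'c' survives

# Reset the target and repeat without DISTINCT: membership in the
# subquery's result is all that matters, so duplicates are irrelevant.
cur.execute("DELETE FROM target")
cur.executemany("INSERT INTO target VALUES (?)", [("a",), ("b",), ("c",)])
cur.execute("DELETE FROM target WHERE k IN (SELECT k FROM staging)")
cur.execute("SELECT count(*) FROM target")
remaining_no_distinct = cur.fetchone()[0]  # same rows deleted
```

Both variants delete the same rows; whether removing `distinct` is actually faster depends on the engine's plan, as the reviewer notes.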

@mikealfare (Contributor)

Two of the Spark test failures are due to an unrelated issue. The errors in the third Spark run are caused by the cluster not coming up in time, which we have a separate PR to fix.

Labels
cla:yes The PR author has signed the CLA
4 participants