feat(incremental): optimize 'insert_overwrite' strategy (#1409) #1410
base: main
Conversation
{{ sql_header if sql_header is not none and include_sql_header }}
begin
begin transaction;
We had a problem with transactions, where other jobs can conflict with them.
For example, if this transaction statement is running and another (normal) statement mutates the same table, the transaction fails:
https://cloud.google.com/bigquery/docs/transactions#transaction_concurrency
This is different from non-transactional queries, which can run concurrently.
At my company it's relatively common to delete data as part of GDPR, or to update late-arriving columns in post-hooks.
I'm not saying this reduction in slot time isn't worth the cost of conflicting jobs, I just want to point it out as a past learning! And if there is a non-transactional version of this logic, that would sidestep the transaction concurrency issue.
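A hedged sketch of the conflict scenario described above (the project, dataset, table, and column names are made up for illustration and not taken from this PR): while the transactional job's transaction is open, a plain DML statement from another job that mutates the same table can cause the transactional one to fail.

```sql
-- Job 1: the incremental run, wrapped in a transaction (sketch).
BEGIN TRANSACTION;

DELETE FROM `my_project.analytics.events`
WHERE DATE(event_ts) IN ('2024-06-01', '2024-06-02');

INSERT INTO `my_project.analytics.events`
SELECT * FROM `my_project.analytics.events__dbt_tmp`;

COMMIT TRANSACTION;

-- Job 2: a "normal" (non-transactional) statement started while Job 1's
-- transaction is still open, e.g. a GDPR deletion or a post-hook update.
-- Per the comment above, a concurrent mutation on the same table makes the
-- transactional job fail (see the transaction concurrency docs linked above).
DELETE FROM `my_project.analytics.events`
WHERE user_id = 'user-to-forget';
```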
We also had a problem where we tried a separate DELETE + INSERT without a transaction, and jobs that ran in between saw no data (especially when the DELETE + INSERT was catching up to the context date in Airflow).
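For contrast, a sketch of the non-transactional DELETE + INSERT variant mentioned above (again with made-up table names): any job that reads the table between the two statements sees the partition empty, which is the gap this comment describes.

```sql
-- Statement 1: drop the partition being replaced.
DELETE FROM `my_project.analytics.events`
WHERE DATE(event_ts) = '2024-06-01';

-- Any query running at this point sees no rows for 2024-06-01.

-- Statement 2: reload the partition from the temporary table.
INSERT INTO `my_project.analytics.events`
SELECT * FROM `my_project.analytics.events__dbt_tmp`
WHERE DATE(event_ts) = '2024-06-01';
```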
b848e74 to 16f9c69
Thank you for your pull request and welcome to our community. We could not parse the GitHub identity of the following contributors: axel_thevenot.
16f9c69 to 130d15e
resolves dbt-labs/dbt-adapters#527
docs: "N/A"

Problem
The MERGE statement is suboptimal in BigQuery when it comes to only replacing partitions in the 'insert_overwrite' strategy for incremental models.

Solution
For the insert_overwrite strategy, where we are looking to replace rows at the partition level, there is a better solution (sketched below), and here is why:
- A DELETE or INSERT statement is cheaper than a MERGE statement.
- A DELETE statement in BigQuery is free at the partition level.
- Compared to the MERGE statement, this reduces the cost by 50.4% and the elapsed time by 35.2% (slot-based and not on-demand).
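As a rough sketch of the two shapes being compared (the table names and partition values are hypothetical, and the exact SQL the macro generates may differ), the current strategy replaces partitions with a single MERGE, while the proposed one uses a partition-level DELETE followed by an INSERT inside a transaction:

```sql
-- Current shape (sketch): MERGE with a constant-false join, deleting the
-- affected partitions and inserting the new rows in one statement.
MERGE INTO `my_project.analytics.events` AS dest
USING `my_project.analytics.events__dbt_tmp` AS src
ON FALSE
WHEN NOT MATCHED BY SOURCE
     AND DATE(dest.event_ts) IN ('2024-06-01', '2024-06-02')
  THEN DELETE
WHEN NOT MATCHED THEN INSERT ROW;

-- Proposed shape (sketch): a partition-level DELETE (free at the
-- partition level) plus a plain INSERT, wrapped in a transaction.
BEGIN TRANSACTION;

DELETE FROM `my_project.analytics.events`
WHERE DATE(event_ts) IN ('2024-06-01', '2024-06-02');

INSERT INTO `my_project.analytics.events`
SELECT * FROM `my_project.analytics.events__dbt_tmp`;

COMMIT TRANSACTION;
```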
Checklist
'insert_overwrite')