# Document new behavior flags for dbt-databricks #8049
```diff
@@ -11,6 +11,8 @@ The following are the current [behavior change flags](/docs/reference/global-con
 | [`use_info_schema_for_columns`](#use-information-schema-for-columns) | 1.9.0 | TBD |
 | [`use_user_folder_for_python`](#use-users-folder-for-python-model-notebooks) | 1.9.0 | TBD |
 | [`use_materialization_v2`](#use-restructured-materializations) | 1.10.0 | TBD |
+| [`use_managed_iceberg`](#use-managed-iceberg) | 1.11.0 | TBD |
+| [`use_replace_on_for_insert_overwrite`](#use-replace-on-for-insert_overwrite-strategy) | 1.11.0 | TBD |

 ## Use information schema for columns
```
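Behavior change flags like these are opt-in project settings; in recent dbt versions they are set under `flags:` in `dbt_project.yml`. A minimal sketch (the values shown are illustrative, not recommendations):

```yaml
# dbt_project.yml -- enabling/disabling the new behavior flags added in this PR
flags:
  use_managed_iceberg: true                   # default: false
  use_replace_on_for_insert_overwrite: false  # default: true
```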
````diff
@@ -178,3 +180,11 @@ models:
 ```

 </File>
+
+## Use managed Iceberg
+
+The `use_managed_iceberg` flag is `False` by default and results in a [UniForm](https://www.databricks.com/blog/delta-uniform-universal-format-lakehouse-interoperability) table when `table_format` is set to `iceberg`. When this flag is set to `True`, the table is created as a [managed Iceberg table](https://docs.databricks.com/aws/en/tables/managed).
+
+## Use `replace on` for `insert_overwrite` strategy
+
+The `use_replace_on_for_insert_overwrite` flag is only relevant when using incremental models with the `insert_overwrite` strategy on SQL warehouses. The flag is `True` by default and uses the `replace on` syntax to perform partition overwrites. When the flag is set to `False`, partition overwrites are performed via `insert overwrite` with dynamic partition overwrite. The latter is only officially supported on cluster compute, and will truncate the entire table when used with SQL warehouses.
````
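For context on the managed Iceberg flag: the table format is a model-level config in dbt-databricks. A hedged YAML sketch (the project and model names here are hypothetical):

```yaml
# dbt_project.yml -- hypothetical project/model names, for illustration only
models:
  my_project:
    sales_iceberg:
      +materialized: table
      +table_format: iceberg  # with use_managed_iceberg: true, this becomes a managed Iceberg table
                              # with the flag unset (default), it becomes a UniForm table
```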
```diff
@@ -240,9 +240,9 @@ insert into table analytics.databricks_incremental

 ### The `insert_overwrite` strategy

-This strategy is most effective when specified alongside a `partition_by` clause in your model config. dbt will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/3.0.0-preview/sql-ref-syntax-dml-insert-overwrite-table.html) that dynamically replaces all partitions included in your query. Be sure to re-select _all_ of the relevant data for a partition when using this incremental strategy.
+This strategy is most effective when specified alongside a `partition_by` or `liquid_clustered_by` clause in your model config. dbt will run an [atomic `insert into .. replace on` statement](https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-syntax-dml-insert-into#replace-on) that dynamically replaces all partitions/clusters included in your query. Be sure to re-select _all_ of the relevant data for a partition/cluster when using this incremental strategy. If [`use_replace_on_for_insert_overwrite`](/reference/global-configs/databricks-changes#use-replace-on-for-insert_overwrite-strategy) is set to `False` or the runtime is older than 17.1, this strategy will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/3.0.0-preview/sql-ref-syntax-dml-insert-overwrite-table.html) instead.
```

> **Suggested change:** capitalize the SQL keywords in the added line as `INSERT INTO .. REPLACE ON`.
Review replies:

> SQL syntax is intentionally lowercase to align with the rest of dbt documentation.

> Let's leave it lowercase.
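For reference, the `insert_overwrite` strategy discussed in this hunk pairs with an incremental model config along these lines (a hedged sketch; the project, model, and column names are hypothetical):

```yaml
# dbt_project.yml -- hypothetical incremental model using insert_overwrite
models:
  my_project:
    events_daily:
      +materialized: incremental
      +incremental_strategy: insert_overwrite
      +partition_by: event_date  # re-select ALL data for each affected partition
```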