**website/docs/docs/core/connect-data-platform/bigquery-setup.md** (1 addition, 1 deletion)

```diff
@@ -292,7 +292,7 @@ my-profile:
 ### Dataset locations
 
 The location of BigQuery datasets can be configured using the `location` configuration in a BigQuery profile.
-`location` may be either a multi-regional location (e.g. `EU`, `US`), or a regional location (e.g. `us-west2` ) as per [the BigQuery documentation](https://cloud.google.com/bigquery/docs/locations) describes.
+`location` may be either a multi-regional location (for example, `EU`, `US`), or a regional location (for example, `us-west2` ) as per [the BigQuery documentation](https://cloud.google.com/bigquery/docs/locations) describes.
```
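This hunk only rewords the `location` docs; for quick reference, a minimal sketch of where `location` sits in a BigQuery profile (the project and dataset names are placeholders, not taken from the diff):

```yaml
# ~/.dbt/profiles.yml -- illustrative sketch only
my-profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project   # placeholder GCP project ID
      dataset: analytics        # placeholder dataset name
      location: EU              # multi-regional, or a regional value such as us-west2
      threads: 4
```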
**website/docs/docs/core/connect-data-platform/dremio-setup.md** (1 addition, 1 deletion)

```diff
@@ -169,5 +169,5 @@ For descriptions of the configurations in these profiles, see [Configurations](#
 | `port` | Yes | `9047` | Port for Dremio Software cluster API endpoints. |
 | `user` | Yes | None | The username of the account to use when logging into the Dremio cluster. |
 | `password` | Yes, if you are not using the pat configuration. | None | The password of the account to use when logging into the Dremio cluster. |
-| `pat` | Yes, if you are not using the user and password configurations. | None | The personal access token to use for authenticating to Dremio. See [Personal Access Tokens](https://docs.dremio.com/software/security/personal-access-tokens/) for instructions about obtaining a token. The use of a personal access token takes precedence if values for the three configurations user, password and pat are specified. |
+| `pat` | Yes, if you are not using the user and password configurations. | None | The personal access token to use for authenticating to Dremio. See [Personal Access Tokens](https://docs.dremio.com/software/security/personal-access-tokens/) for instructions about obtaining a token. The use of a personal access token takes precedence if values for the three configurations user, password, and pat are specified. |
 | `use_ssl` | Yes | `true` | Acceptable values are `true` and `false`. If the value is set to true, ensure that full wire encryption is configured in your Dremio cluster. See [Prerequisites for Dremio Software](#prerequisites-for-dremio-software). |
```
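Since the edited row describes `pat` precedence, a sketch of a Dremio Software profile authenticating with a token may help; the hostname is a placeholder, and the profile shape assumes the standard `dbt-dremio` Software settings from the table above:

```yaml
# ~/.dbt/profiles.yml -- illustrative sketch only
my_dremio_profile:
  target: dev
  outputs:
    dev:
      type: dremio
      software_host: dremio.example.com   # placeholder hostname
      port: 9047
      user: dbt_user                      # ignored for auth when pat is set
      pat: "{{ env_var('DREMIO_PAT') }}"  # keeps the token out of the file
      use_ssl: true
```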
**website/docs/docs/core/connect-data-platform/hive-setup.md** (1 addition, 1 deletion)

```diff
@@ -85,7 +85,7 @@ your_profile_name:
 
 </File>
 
-Note: When creating workload user in CDP, make sure the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ and WRITE permissions. If you need the user to execute GRANT statements, you should also configure the appropriate GRANT permissions for them. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option. For more information, see [`grants`](/reference/resource-configs/grants) and [on-run-start & on-run-end](/reference/project-configs/on-run-start-on-run-end).
+Note: When creating workload user in CDP, make sure the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ, and WRITE permissions. If you need the user to execute GRANT statements, you should also configure the appropriate GRANT permissions for them. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option. For more information, see [`grants`](/reference/resource-configs/grants) and [on-run-start & on-run-end](/reference/project-configs/on-run-start-on-run-end).
```
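The note links to the `grants` config; a minimal sketch of what that looks like in `dbt_project.yml` (the project and role names are illustrative):

```yaml
# dbt_project.yml -- illustrative sketch only
models:
  my_project:
    +grants:
      select: ['reporting_role']   # placeholder role/group name
```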
**website/docs/docs/core/connect-data-platform/impala-setup.md** (1 addition, 1 deletion)

```diff
@@ -91,7 +91,7 @@ your_profile_name:
 
 </File>
 
-Note: When creating workload user in CDP ensure that the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ and WRITE permissions. If the user is required to execute GRANT statements, see for instance (/reference/resource-configs/grants) or (/reference/project-configs/on-run-start-on-run-end) appropriate GRANT permissions should be configured. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option.
+Note: When creating workload user in CDP ensure that the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ, and WRITE permissions. If the user is required to execute GRANT statements, see for instance (/reference/resource-configs/grants) or (/reference/project-configs/on-run-start-on-run-end) appropriate GRANT permissions should be configured. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option.
```
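The same note points at the on-run-start/on-run-end hooks; a sketch of issuing a GRANT at the end of a run (the group name is illustrative, and the exact GRANT syntax depends on your authorization provider):

```yaml
# dbt_project.yml -- illustrative sketch only
on-run-end:
  - "GRANT SELECT ON DATABASE {{ target.schema }} TO GROUP reporting_users"
```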
**website/docs/docs/core/connect-data-platform/maxcompute-setup.md** (8 additions, 8 deletions)

```diff
@@ -88,7 +88,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: access_key # credential type, Optional, default is 'access_key'
       access_key_id: accessKeyId # AccessKeyId
@@ -106,7 +106,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: sts # credential type
       access_key_id: accessKeyId # AccessKeyId
@@ -125,7 +125,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: ram_role_arn # credential type
       access_key_id: accessKeyId # AccessKeyId
@@ -148,7 +148,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: oidc_role_arn # credential type
       access_key_id: accessKeyId # AccessKeyId
@@ -181,7 +181,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: ecs_ram_role # credential type
       role_name: roleName # `role_name` is optional. It will be retrieved automatically if not set. It is highly recommended to set it up to reduce requests.
@@ -199,7 +199,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: credentials_uri # credential type
       credentials_uri: http://local_or_remote_uri/ # Credentials URI
@@ -216,7 +216,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: bearer # credential type
       bearer_token: bearerToken # BearerToken
@@ -231,7 +231,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
```
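Every variant above hard-codes `access_key_id`; dbt's standard `env_var()` function can keep credentials out of the file instead. A sketch (the `access_key_secret` field name is an assumption based on the usual Alibaba Cloud credential pair; it does not appear in this diff):

```yaml
# Fragment of a MaxCompute profile -- illustrative sketch only
access_key_id: "{{ env_var('MAXCOMPUTE_ACCESS_KEY_ID') }}"
access_key_secret: "{{ env_var('MAXCOMPUTE_ACCESS_KEY_SECRET') }}"  # assumed companion field
```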
**website/docs/docs/core/connect-data-platform/postgres-setup.md** (1 addition, 1 deletion)

```diff
@@ -99,7 +99,7 @@ If `dbt-postgres` encounters an operational error or timeout when opening a new
 `psycopg2-binary` is installed by default when installing `dbt-postgres`.
 Installing `psycopg2-binary` uses a pre-built version of `psycopg2` which may not be optimized for your particular machine.
 This is ideal for development and testing workflows where performance is less of a concern and speed and ease of install is more important.
-However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and architecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
+However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system, and architecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
```
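The hunk header mentions operational errors and timeouts when opening connections; both behaviors are tunable in the profile. A minimal sketch, assuming the documented `connect_timeout` and `retries` options (host, credentials, and database names are placeholders):

```yaml
# ~/.dbt/profiles.yml -- illustrative sketch only
my-postgres-profile:
  target: dev
  outputs:
    dev:
      type: postgres
      host: db.example.com               # placeholder host
      user: dbt_user                     # placeholder user
      password: "{{ env_var('PGPASSWORD') }}"
      port: 5432
      dbname: analytics                  # placeholder database
      schema: public
      connect_timeout: 10                # seconds before an open attempt is abandoned
      retries: 2                         # reconnect attempts on operational errors/timeouts
```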
**website/docs/docs/core/connect-data-platform/spark-setup.md**

```diff
 dbt-spark can connect to Spark clusters by four different methods:
 
 - [`odbc`](#odbc) is the preferred method when connecting to Databricks. It supports connecting to a SQL Endpoint or an all-purpose interactive cluster.
-- [`thrift`](#thrift) connects directly to the lead node of a cluster, either locally hosted / on premise or in the cloud (e.g. Amazon EMR).
+- [`thrift`](#thrift) connects directly to the lead node of a cluster, either locally hosted / on premise or in the cloud (for example, Amazon EMR).
 - [`http`](#http) is a more generic method for connecting to a managed service that provides an HTTP endpoint. Currently, this includes connections to a Databricks interactive cluster.
 
@@ -98,7 +98,7 @@ your_profile_name:
 
 ### Thrift
 
-Use the `thrift` connection method if you are connecting to a Thrift server sitting in front of a Spark cluster, e.g. a cluster running locally or on Amazon EMR.
+Use the `thrift` connection method if you are connecting to a Thrift server sitting in front of a Spark cluster, for example, a cluster running locally or on Amazon EMR.
 
 <File name='~/.dbt/profiles.yml'>
@@ -115,8 +115,8 @@ your_profile_name:
       # optional
       port: [port] # default 10001
       user: [user]
-      auth: [e.g. KERBEROS]
-      kerberos_service_name: [e.g. hive]
+      auth: [for example, KERBEROS]
+      kerberos_service_name: [for example, hive]
       use_ssl: [true|false] # value of hive.server2.use.SSL, default false
       server_side_parameters:
         "spark.driver.memory": "4g"
@@ -126,7 +126,7 @@ your_profile_name:
 
 ### HTTP
 
-Use the `http` method if your Spark provider supports generic connections over HTTP (e.g. Databricks interactive cluster).
+Use the `http` method if your Spark provider supports generic connections over HTTP (for example, Databricks interactive cluster).
```
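For context on the `http` method this last hunk rewords, a sketch of an `http`-method profile for a Databricks interactive cluster; the host, token, and cluster ID are placeholders:

```yaml
# ~/.dbt/profiles.yml -- illustrative sketch only
your_profile_name:
  target: dev
  outputs:
    dev:
      type: spark
      method: http
      schema: analytics                         # placeholder schema
      host: dbc-xxxx.cloud.databricks.com       # placeholder workspace host
      port: 443
      token: "{{ env_var('DATABRICKS_TOKEN') }}"
      cluster: 1234-567890-abcde123             # placeholder cluster ID
      connect_retries: 3
      connect_timeout: 60
```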
0 commit comments