
Commit 8a2e86e

Style updates to core directory (#8075)

2 parents: d1f3364 + 838c8ba

9 files changed: +20 -20 lines changed

website/docs/docs/core/connect-data-platform/bigquery-setup.md

Lines changed: 1 addition & 1 deletion

@@ -292,7 +292,7 @@ my-profile:
 ### Dataset locations
 
 The location of BigQuery datasets can be configured using the `location` configuration in a BigQuery profile.
-`location` may be either a multi-regional location (e.g. `EU`, `US`), or a regional location (e.g. `us-west2` ) as per [the BigQuery documentation](https://cloud.google.com/bigquery/docs/locations) describes.
+`location` may be either a multi-regional location (for example, `EU`, `US`), or a regional location (for example, `us-west2` ) as per [the BigQuery documentation](https://cloud.google.com/bigquery/docs/locations) describes.
 Example:
 
 ```yaml
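
For context, the `location` setting this hunk describes slots into a BigQuery profile roughly as in the sketch below, a minimal example assuming an OAuth connection; the project and dataset names are hypothetical placeholders.

```yaml
# Minimal sketch of a BigQuery profile using `location`.
# `my-gcp-project` and `analytics` are hypothetical placeholders.
my-profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project
      dataset: analytics
      threads: 4
      location: us-west2   # or a multi-region such as US or EU
```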

website/docs/docs/core/connect-data-platform/clickhouse-setup.md

Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ clickhouse-service:
   schema: [ default ] # ClickHouse database for dbt models
 
   # optional
-  host: [ <your-clickhouse-host> ] # Your clickhouse cluster url e.g., abc123.clickhouse.cloud. Defaults to `localhost`.
+  host: [ <your-clickhouse-host> ] # Your clickhouse cluster url for example, abc123.clickhouse.cloud. Defaults to `localhost`.
   port: [ 8123 ] # Defaults to 8123, 8443, 9000, 9440 depending on the secure and driver settings
   user: [ default ] # User for all database operations
   password: [ <empty string> ] # Password for the user
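
Filled in, that fragment might look like the following sketch of a ClickHouse Cloud connection; the hostname and password are hypothetical placeholders.

```yaml
# Hedged sketch of a ClickHouse profile; `abc123.clickhouse.cloud`
# is a hypothetical host and the password is a placeholder.
clickhouse-service:
  target: dev
  outputs:
    dev:
      type: clickhouse
      schema: default
      host: abc123.clickhouse.cloud
      port: 8443          # secure HTTP port
      secure: true
      user: default
      password: "<your-password>"
```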

website/docs/docs/core/connect-data-platform/dremio-setup.md

Lines changed: 1 addition & 1 deletion

@@ -169,5 +169,5 @@ For descriptions of the configurations in these profiles, see [Configurations](#
 | `port` | Yes | `9047` | Port for Dremio Software cluster API endpoints. |
 | `user` | Yes | None | The username of the account to use when logging into the Dremio cluster. |
 | `password` | Yes, if you are not using the pat configuration. | None | The password of the account to use when logging into the Dremio cluster. |
-| `pat` | Yes, if you are not using the user and password configurations. | None | The personal access token to use for authenticating to Dremio. See [Personal Access Tokens](https://docs.dremio.com/software/security/personal-access-tokens/) for instructions about obtaining a token. The use of a personal access token takes precedence if values for the three configurations user, password and pat are specified. |
+| `pat` | Yes, if you are not using the user and password configurations. | None | The personal access token to use for authenticating to Dremio. See [Personal Access Tokens](https://docs.dremio.com/software/security/personal-access-tokens/) for instructions about obtaining a token. The use of a personal access token takes precedence if values for the three configurations user, password, and pat are specified. |
 | `use_ssl` | Yes | `true` | Acceptable values are `true` and `false`. If the value is set to true, ensure that full wire encryption is configured in your Dremio cluster. See [Prerequisites for Dremio Software](#prerequisites-for-dremio-software). |
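
Putting these options together, a token-based Dremio Software profile might look like the sketch below; the host is hypothetical, and `software_host` is assumed here to be the relevant host key for Dremio Software.

```yaml
# Hedged sketch of a Dremio Software profile using a personal access
# token; `dremio.example.com` is a hypothetical host, and `software_host`
# is an assumed key name. Per the table above, `pat` takes precedence
# if `password` is also set.
my_dremio_profile:
  target: dev
  outputs:
    dev:
      type: dremio
      software_host: dremio.example.com
      port: 9047
      use_ssl: true
      user: dbt_user
      pat: "<personal-access-token>"
```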

website/docs/docs/core/connect-data-platform/hive-setup.md

Lines changed: 1 addition & 1 deletion

@@ -85,7 +85,7 @@ your_profile_name:
 
 </File>
 
-Note: When creating workload user in CDP, make sure the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ and WRITE permissions. If you need the user to execute GRANT statements, you should also configure the appropriate GRANT permissions for them. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option. For more information, see [`grants`](/reference/resource-configs/grants) and [on-run-start & on-run-end](/reference/project-configs/on-run-start-on-run-end).
+Note: When creating workload user in CDP, make sure the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ, and WRITE permissions. If you need the user to execute GRANT statements, you should also configure the appropriate GRANT permissions for them. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option. For more information, see [`grants`](/reference/resource-configs/grants) and [on-run-start & on-run-end](/reference/project-configs/on-run-start-on-run-end).
 
 ### Kerberos
 
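
The `grants` config the note links to is declared in YAML; a minimal sketch in dbt_project.yml, assuming a hypothetical project and role name:

```yaml
# Hedged sketch of the `grants` config referenced in the note above;
# `my_project` and `reporting_role` are hypothetical names.
models:
  my_project:
    +grants:
      select: ['reporting_role']
```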

website/docs/docs/core/connect-data-platform/impala-setup.md

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ your_profile_name:
 
 </File>
 
-Note: When creating workload user in CDP ensure that the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ and WRITE permissions. If the user is required to execute GRANT statements, see for instance (/reference/resource-configs/grants) or (/reference/project-configs/on-run-start-on-run-end) appropriate GRANT permissions should be configured. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option.
+Note: When creating workload user in CDP ensure that the user has CREATE, SELECT, ALTER, INSERT, UPDATE, DROP, INDEX, READ, and WRITE permissions. If the user is required to execute GRANT statements, see for instance (/reference/resource-configs/grants) or (/reference/project-configs/on-run-start-on-run-end) appropriate GRANT permissions should be configured. When using Apache Ranger, permissions for allowing GRANT are typically set using "Delegate Admin" option.
 
 ### Kerberos
 
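
The on-run-end route mentioned in the note can issue GRANTs directly; a hedged sketch in dbt_project.yml, with a hypothetical role name and Ranger-style role syntax:

```yaml
# Hedged sketch of granting access via an on-run-end hook, per the
# note's second link; `analysts_role` is hypothetical and the GRANT
# syntax assumes Ranger-managed roles.
on-run-end:
  - "GRANT SELECT ON DATABASE {{ target.schema }} TO ROLE analysts_role"
```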

website/docs/docs/core/connect-data-platform/maxcompute-setup.md

Lines changed: 8 additions & 8 deletions

@@ -88,7 +88,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: access_key # credential type, Optional, default is 'access_key'
       access_key_id: accessKeyId # AccessKeyId

@@ -106,7 +106,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: sts # credential type
       access_key_id: accessKeyId # AccessKeyId

@@ -125,7 +125,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: ram_role_arn # credential type
       access_key_id: accessKeyId # AccessKeyId

@@ -148,7 +148,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: oidc_role_arn # credential type
       access_key_id: accessKeyId # AccessKeyId

@@ -181,7 +181,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: ecs_ram_role # credential type
       role_name: roleName # `role_name` is optional. It will be retrieved automatically if not set. It is highly recommended to set it up to reduce requests.

@@ -199,7 +199,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: credentials_uri # credential type
       credentials_uri: http://local_or_remote_uri/ # Credentials URI

@@ -216,7 +216,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: bearer # credential type
       bearer_token: bearerToken # BearerToken

@@ -231,7 +231,7 @@ jaffle_shop: # this needs to match the profile in your dbt_project.yml file
     dev:
       type: maxcompute
       project: dbt-example # Replace this with your project name
-      schema: default # Replace this with schema name, e.g. dbt_bilbo
+      schema: default # Replace this with schema name, for example, dbt_bilbo
       endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api # Replace this with your maxcompute endpoint
       auth_type: chain
 ```
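
Each hunk above touches the same `schema` comment across the adapter's auth examples; for orientation, a complete minimal profile under the default `access_key` auth might look like this sketch. The env var names are hypothetical, and `access_key_secret` is assumed to pair with the `access_key_id` shown in the hunks.

```yaml
# Hedged sketch of a complete MaxCompute profile with access_key auth.
# The env var names are hypothetical; `access_key_secret` is assumed
# to be the companion key to `access_key_id`.
jaffle_shop:
  target: dev
  outputs:
    dev:
      type: maxcompute
      project: dbt-example
      schema: dbt_bilbo
      endpoint: http://service.cn-shanghai.maxcompute.aliyun.com/api
      auth_type: access_key
      access_key_id: "{{ env_var('ALIBABA_CLOUD_ACCESS_KEY_ID') }}"
      access_key_secret: "{{ env_var('ALIBABA_CLOUD_ACCESS_KEY_SECRET') }}"
```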

website/docs/docs/core/connect-data-platform/oracle-setup.md

Lines changed: 1 addition & 1 deletion

@@ -453,7 +453,7 @@ Oracle's Autonomous Database Serverless (ADB-S) users can run dbt-py models usin
 
 ### Features
 - User Defined Python function is run in an ADB-S spawned Python 3.12.1 runtime
-- Access to external Python packages available in the Python runtime. For e.g. `numpy`, `pandas`, `scikit_learn` etc
+- Access to external Python packages available in the Python runtime. For example, `numpy`, `pandas`, `scikit_learn` etc
 - Integration with Conda 24.x to create environments with custom Python packages
 - Access to Database session in the Python function
 - DataFrame read API to read `TABLES`, `VIEWS`, and ad-hoc `SELECT` queries as DataFrames

website/docs/docs/core/connect-data-platform/postgres-setup.md

Lines changed: 1 addition & 1 deletion

@@ -99,7 +99,7 @@ If `dbt-postgres` encounters an operational error or timeout when opening a new
 `psycopg2-binary` is installed by default when installing `dbt-postgres`.
 Installing `psycopg2-binary` uses a pre-built version of `psycopg2` which may not be optimized for your particular machine.
 This is ideal for development and testing workflows where performance is less of a concern and speed and ease of install is more important.
-However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and architecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
+However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system, and architecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
 
 
 To use `psycopg2`:

website/docs/docs/core/connect-data-platform/spark-setup.md

Lines changed: 5 additions & 5 deletions

@@ -53,7 +53,7 @@ $ python -m pip install "dbt-spark[session]"
 dbt-spark can connect to Spark clusters by four different methods:
 
 - [`odbc`](#odbc) is the preferred method when connecting to Databricks. It supports connecting to a SQL Endpoint or an all-purpose interactive cluster.
-- [`thrift`](#thrift) connects directly to the lead node of a cluster, either locally hosted / on premise or in the cloud (e.g. Amazon EMR).
+- [`thrift`](#thrift) connects directly to the lead node of a cluster, either locally hosted / on premise or in the cloud (for example, Amazon EMR).
 - [`http`](#http) is a more generic method for connecting to a managed service that provides an HTTP endpoint. Currently, this includes connections to a Databricks interactive cluster.
 
 

@@ -98,7 +98,7 @@ your_profile_name:
 
 ### Thrift
 
-Use the `thrift` connection method if you are connecting to a Thrift server sitting in front of a Spark cluster, e.g. a cluster running locally or on Amazon EMR.
+Use the `thrift` connection method if you are connecting to a Thrift server sitting in front of a Spark cluster, for example, a cluster running locally or on Amazon EMR.
 
 <File name='~/.dbt/profiles.yml'>
 

@@ -115,8 +115,8 @@ your_profile_name:
   # optional
   port: [port] # default 10001
   user: [user]
-  auth: [e.g. KERBEROS]
-  kerberos_service_name: [e.g. hive]
+  auth: [for example, KERBEROS]
+  kerberos_service_name: [for example, hive]
   use_ssl: [true|false] # value of hive.server2.use.SSL, default false
   server_side_parameters:
     "spark.driver.memory": "4g"

@@ -126,7 +126,7 @@ your_profile_name:
 
 ### HTTP
 
-Use the `http` method if your Spark provider supports generic connections over HTTP (e.g. Databricks interactive cluster).
+Use the `http` method if your Spark provider supports generic connections over HTTP (for example, Databricks interactive cluster).
 
 <File name='~/.dbt/profiles.yml'>
 
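
To ground the `thrift` hunks above, a hedged sketch of a complete thrift profile built from the fields they show; the EMR hostname and schema are hypothetical placeholders.

```yaml
# Hedged sketch of a `thrift` connection profile per the hunks above;
# `emr-primary.example.com` is a hypothetical lead-node host.
your_profile_name:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift
      host: emr-primary.example.com
      port: 10001
      schema: analytics
      user: dbt_user
```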
