Make note of the project name and the data set name and location.

2.1 Automated load process

These instructions are written for running in a Cloud Shell environment. Ensure that your environment is configured to access the Google Cloud project you want to use:

```shell
gcloud config set project [PROJECT_ID]
```
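If you want to confirm which project and account are active before loading anything, the standard gcloud commands below can be used. This is generic gcloud usage, not something specific to Optimus Prime:

```shell
# Show the currently configured project and the authenticated account.
gcloud config get-value project
gcloud auth list
```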
2.1.3 Configure automation

The automated load process is configured by setting several environment variables and then executing a set of scripts in the <workingdirectory>/oracle-database-assessment/scripts/ directory.

Set these environment variables prior to starting the data load process:
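The full list of variables is not shown in this excerpt. As a small, illustrative example, the only variable visible elsewhere in this section is the column separator used for the collected files; the remaining variables are defined and documented in _configure_op_env.sh:

```shell
# Illustrative only: COLSEP is the one variable shown in this part of the guide.
# The remaining variables are defined and validated by _configure_op_env.sh.
export COLSEP='|'
```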
2.1.4 Execute the load scripts

The load scripts expect to be run from the <workingdirectory>/oracle-database-assessment/scripts directory. Change to this directory and run the following commands in numeric order. Check the output of each script for errors before continuing to the next.

```shell
./scripts/1_activate_op.sh
./scripts/2_load_op.sh
./scripts/3_run_op_etl.sh
./scripts/4_gen_op_report_url.sh
```
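If you prefer the shell to stop as soon as one step fails, a small wrapper along these lines can run the same scripts in order. This is a sketch, not part of the repository, and it assumes the working directory described above:

```shell
#!/usr/bin/env bash
# Sketch: run the Optimus Prime load scripts in numeric order and stop at the first failure.
set -euo pipefail
for script in 1_activate_op.sh 2_load_op.sh 3_run_op_etl.sh 4_gen_op_report_url.sh; do
  echo "=== Running ${script} ==="
  ./scripts/"${script}" 2>&1 | tee "${script%.sh}.log"   # keep a per-step log for review
done
```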
The function of each script is as follows.

- _configure_op_env.sh - Defines environment variables that are used in the other scripts. This script is executed only by the other scripts in the loading process.
- 1_activate_op.sh - Installs necessary Python support modules and activates the Python virtual environment for Optimus Prime.
- 2_load_op.sh - Loads the client data files into the base Optimus Prime tables in the requested data set.
- 3_run_op_etl.sh - Installs and runs BigQuery procedures that create additional views and tables to support the Optimus Prime dashboard.
# If you want to import one single Optimus Prime file collection (from 1 single database), please follow the below step:

optimus-prime -dataset newdatasetORexistingdataset -collectionid 080421224807 --files-location /<work-directory>/oracle-database-assessment-output --project-name my-awesome-gcp-project -importcomment "this is for prod"

# If you want to import various Optimus Prime file collections (from various databases) that are stored under the same directory being used for --files-location, you can add two additional flags to your command (--from-dataframe -consolidatedataframes) and pass only "" to -collectionid. See the example below:
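# (Sketch - the example referenced above is not shown in this excerpt. Based only on the flags
# described in this section, a consolidated import could look like the following; dataset,
# project and path values are placeholders.)
optimus-prime -dataset newdatasetORexistingdataset -collectionid "" --files-location /<work-directory>/oracle-database-assessment-output --project-name my-awesome-gcp-project --from-dataframe -consolidatedataframes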
# If you want to import only a specific db version or sql version from Optimus Prime file collections that are stored under the same directory being used for --files-location:
- `--dataset`: The name of the dataset in Google BigQuery. It is created if it does not exist; if it already exists, the existing dataset is used.
- `--collection-id`: The file identification, i.e. the trailing group of numbers in the filename, which represents `<datetime> (mmddrrhh24miss)`.
  - In this example of a filename `opdb__usedspacedetails__121_0.1.0_mydbhost.mycompany.com.ORCLDB.orcl1.071621111714.log` the file identification is `071621111714`.
- `--files-location`: The location in which the opdb\*log files were saved.
- `--project-name`: The GCP project in which the data will be loaded.
- `--delete-dataset`: Optional. Use it if you want to delete the whole existing dataset before importing the data.
  - WARNING: It will permanently DELETE ALL tables previously in the dataset. No further confirmation will be required. Use it with caution.
- `--import-comment`: Optional. Use it to store a comment about the load in the opkeylog table, e.g. "This is for Production import".
- `--filter-by-sql-version`: Optional. If the folder contains files from multiple SQL versions, only the files for the specified SQL version are loaded.
- `--filter-by-db-version`: Optional. If the folder contains files from multiple DB versions, only the files for the specified DB version are loaded.
- `--skip-validations`: Optional. Default is False. If the flag is used, file validations are skipped.
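One quick way to confirm that the import created the expected tables (including the opkeylog table mentioned above) is the standard BigQuery CLI. This is generic bq usage, not part of Optimus Prime, and the project and dataset names below are the placeholders used in the examples above:

```shell
# List the tables created in the target dataset.
bq ls --project_id my-awesome-gcp-project newdatasetORexistingdataset
```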
> NOTE: If your file has elapsed time or any other string except data, run the following script to remove it