
feat: add --switch-config-path parameter to llm-transpile command #2256

Open
hiroyukinakazato-db wants to merge 7 commits into main from feature/switch-config-path-param

Conversation

hiroyukinakazato-db (Contributor) commented Jan 30, 2026

Summary

  • Add optional --switch-config-path parameter to llm-transpile CLI command
  • Parameter allows users to specify a custom Switch configuration file in the workspace
  • Path must start with /Workspace/ (validation included)
  • When not specified, Switch uses its default configuration

Closes #2255
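
For illustration, a minimal sketch of the workspace-path check described above (the function name and error type here are hypothetical, not necessarily the PR's actual implementation):

```python
def validate_switch_config_path(path: str | None) -> str | None:
    """Hypothetical validator for the --switch-config-path option.

    Returns the path unchanged when valid, or None when the option was
    not supplied (Switch then falls back to its default configuration).
    """
    if path is None:
        return None
    if not path.startswith("/Workspace/"):
        raise ValueError(
            f"--switch-config-path must be a workspace path starting with /Workspace/, got: {path}"
        )
    return path
```

The validated value would then be forwarded to Switch as the `switch_config_path` job parameter (see the test plan below).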

Test plan

  • Unit tests added and passing
  • E2E test: verify --switch-config-path works with actual workspace execution
  • Verify Switch side correctly reads switch_config_path job parameter
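
A unit test for that check might look like the following sketch (assuming the hypothetical `validate_switch_config_path` helper above; the PR's actual tests are not reproduced here):

```python
import pytest


def test_workspace_path_is_accepted():
    path = "/Workspace/Users/someone@example.com/switch-config.yml"
    assert validate_switch_config_path(path) == path


def test_non_workspace_path_is_rejected():
    with pytest.raises(ValueError):
        validate_switch_config_path("/tmp/switch-config.yml")


def test_missing_path_means_switch_default():
    assert validate_switch_config_path(None) is None
```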

Add support for specifying a custom Switch configuration file when
running LLM transpilation. The path must be a workspace path starting
with /Workspace/. When not specified, Switch uses its default config.

Closes #2255

codecov bot commented Jan 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 66.43%. Comparing base (1d855d0) to head (8932b1a).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2256      +/-   ##
==========================================
+ Coverage   66.41%   66.43%   +0.02%     
==========================================
  Files          99       99              
  Lines        9094     9100       +6     
  Branches      974      977       +3     
==========================================
+ Hits         6040     6046       +6     
  Misses       2878     2878              
  Partials      176      176              

☔ View full report in Codecov by Sentry.


github-actions bot commented Jan 30, 2026

❌ 142/143 passed, 6 flaky, 1 failed, 5 skipped, 36m12s total

❌ test_recon_sql_server_job_succeeds: databricks.sdk.errors.sdk.OperationFailed: failed to reach TERMINATED or SKIPPED, got RunLifeCycleState.INTERNAL_ERROR: Task run_reconciliation failed with message: Workload failed, see run output for details. (10m11.814s)
... (skipped 56232 bytes)
[tail of a truncated gRPC/Spark Connect stack trace omitted]
01:33 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy-8OYAqvMQ: https://DATABRICKS_HOST/compute/clusters/0228-013355-6ks1n5ds
01:33 DEBUG [databricks.labs.pytester.fixtures.baseline] added cluster fixture: <databricks.sdk.service._internal.Wait object at 0x7f7da36d2110>
01:41 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_cfnlf8kjy catalog: https://DATABRICKS_HOST/#explore/data/dummy_cfnlf8kjy
01:41 DEBUG [databricks.labs.pytester.fixtures.baseline] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=<CatalogType.MANAGED_CATALOG: 'MANAGED_CATALOG'>, comment=None, connection_name=None, created_at=1772242889513, created_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', effective_predictive_optimization_flag=EffectivePredictiveOptimizationFlag(value=<EnablePredictiveOptimization.DISABLE: 'DISABLE'>, inherited_from_name='primary', inherited_from_type=None), enable_predictive_optimization=<EnablePredictiveOptimization.INHERIT: 'INHERIT'>, full_name='dummy_cfnlf8kjy', isolation_mode=<CatalogIsolationMode.OPEN: 'OPEN'>, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='dummy_cfnlf8kjy', options=None, owner='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', properties={'RemoveAfter': '2026022803'}, provider_name=None, provisioning_info=None, securable_type=<SecurableType.CATALOG: 'CATALOG'>, share_name=None, storage_location=None, storage_root=None, updated_at=1772242889513, updated_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e')
01:41 INFO [tests.integration.reconcile.conftest] Created catalog dummy_cfnlf8kjy for recon tests
01:41 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_cfnlf8kjy.dummy_sgpb24zge schema: https://DATABRICKS_HOST/#explore/data/dummy_cfnlf8kjy/dummy_sgpb24zge
01:41 DEBUG [databricks.labs.pytester.fixtures.baseline] added schema fixture: SchemaInfo(browse_only=None, catalog_name='dummy_cfnlf8kjy', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge', metastore_id=None, name='dummy_sgpb24zge', owner=None, properties=None, schema_id=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
01:41 INFO [tests.integration.reconcile.conftest] Created schema dummy_sgpb24zge in catalog dummy_cfnlf8kjy for recon tests
01:41 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_sgpb24zge volume: https://DATABRICKS_HOST/#explore/data/dummy_cfnlf8kjy/dummy_sgpb24zge/dummy_sgpb24zge
01:41 DEBUG [databricks.labs.pytester.fixtures.baseline] added volume fixture: VolumeInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', comment=None, created_at=1772242891925, created_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_sgpb24zge', metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='dummy_sgpb24zge', owner='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', schema_name='dummy_sgpb24zge', storage_location='abfss://labs-CLOUD_ENV-TEST_CATALOG-container@databrickslabsstorage.dfs.core.windows.net/8952c1e3-b265-4adf-98c3-6f755e2e1453/volumes/45337f7d-2d4a-46d2-92ee-41b2d0eab6bc', updated_at=1772242891925, updated_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', volume_id='45337f7d-2d4a-46d2-92ee-41b2d0eab6bc', volume_type=<VolumeType.MANAGED: 'MANAGED'>)
01:41 INFO [tests.integration.reconcile.conftest] Using recon job overrides: ReconcileJobConfig(existing_cluster_id='0228-013355-6ks1n5ds', tags={'lakebridge': 'reconcile_test'})
01:41 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_tedcvsdep schema: https://DATABRICKS_HOST/#explore/data/dummy_cfnlf8kjy/dummy_sgpb24zge/dummy_tedcvsdep
01:41 DEBUG [databricks.labs.pytester.fixtures.baseline] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_tedcvsdep', metastore_id=None, name='dummy_tedcvsdep', owner=None, pipeline_id=None, properties={'RemoveAfter': '2026022803'}, row_filter=None, schema_name='dummy_sgpb24zge', securable_kind_manifest=None, sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_sgpb24zge/dummy_tedcvsdep', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
01:41 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_t3jtlfvwy schema: https://DATABRICKS_HOST/#explore/data/dummy_cfnlf8kjy/dummy_sgpb24zge/dummy_t3jtlfvwy
01:41 DEBUG [databricks.labs.pytester.fixtures.baseline] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_t3jtlfvwy', metastore_id=None, name='dummy_t3jtlfvwy', owner=None, pipeline_id=None, properties={'RemoveAfter': '2026022803'}, row_filter=None, schema_name='dummy_sgpb24zge', securable_kind_manifest=None, sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_sgpb24zge/dummy_t3jtlfvwy', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
01:41 INFO [tests.integration.reconcile.conftest] Created recon tables dummy_tedcvsdep, dummy_t3jtlfvwy in schema dummy_sgpb24zge
01:41 INFO [tests.integration.reconcile.conftest] Inserted data into table dummy_tedcvsdep and got response StatementStatus(error=None, state=<StatementState.SUCCEEDED: 'SUCCEEDED'>)
01:41 INFO [tests.integration.reconcile.conftest] Inserted data into table dummy_t3jtlfvwy and got response StatementStatus(error=None, state=<StatementState.SUCCEEDED: 'SUCCEEDED'>)
01:41 INFO [tests.integration.reconcile.conftest] Setting up application context for recon tests
01:41 INFO [tests.integration.reconcile.conftest] Installing app and recon configuration into workspace
01:41 DEBUG [databricks.labs.lakebridge.install] No existing version found in workspace; assuming fresh installation.
01:41 INFO [databricks.labs.lakebridge.install] Installing Lakebridge reconcile Metadata components.
01:41 INFO [databricks.labs.lakebridge.deployment.recon] Installing reconcile components.
01:41 INFO [databricks.labs.lakebridge.deployment.recon] Deploying reconciliation metadata tables.
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table main in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table metric in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table detail in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table aggregate_metric in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table aggregate_detail in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.table] Deploying table aggregate_rule in dummy_cfnlf8kjy.dummy_sgpb24zge
01:41 INFO [databricks.labs.lakebridge.deployment.table] SQL Backend used for deploying table: StatementExecutionBackend
01:41 INFO [databricks.labs.lakebridge.deployment.recon] Deploying reconciliation dashboards.
01:41 INFO [databricks.labs.lakebridge.deployment.dashboard] Deploying dashboards from base folder /home/runner/work/lakebridge/lakebridge/src/databricks/labs/lakebridge/resources/reconcile/dashboards
01:41 WARNING [databricks.labs.lsql.dashboards] Parsing : No expression was parsed from '' (repeated 11 times)
01:41 INFO [databricks.labs.lakebridge.deployment.dashboard] Dashboard deployed with URL: https://DATABRICKS_HOST/sql/dashboardsv3/01f11446a822139bb600db630d2949e9
01:41 WARNING [databricks.labs.lsql.dashboards] Parsing : No expression was parsed from '' (repeated 5 times)
01:41 INFO [databricks.labs.lakebridge.deployment.dashboard] Dashboard deployed with URL: https://DATABRICKS_HOST/sql/dashboardsv3/01f11446a92f19c28ed4423c2b038b53
01:41 INFO [databricks.labs.lakebridge.deployment.recon] Deploying reconciliation jobs.
01:41 INFO [databricks.labs.lakebridge.deployment.job] Deploying reconciliation job.
01:41 DEBUG [databricks.labs.lakebridge.deployment.job] Applying deployment overrides: ReconcileJobConfig(existing_cluster_id='0228-013355-6ks1n5ds', tags={'lakebridge': 'reconcile_test'})
01:41 WARNING [databricks.labs.lakebridge.deployment.job] Parsed package name databricks_labs_lakebridge does not match product name, using TEST_SCHEMA.
01:41 DEBUG [databricks.labs.lakebridge.deployment.job] Reconciliation job task cluster: existing: 0228-013355-6ks1n5ds or name: None
01:41 INFO [databricks.labs.lakebridge.deployment.job] Creating new job configuration for job `Reconciliation Runner`
01:41 INFO [databricks.labs.lakebridge.deployment.job] Reconciliation job deployed with job_id=820117963787639
01:41 INFO [databricks.labs.lakebridge.deployment.job] Job URL: https://DATABRICKS_HOST#job/820117963787639
01:41 INFO [databricks.labs.lakebridge.deployment.recon] Installation of reconcile components completed successfully.
01:41 INFO [tests.integration.reconcile.conftest] Application context setup complete for recon tests
01:41 DEBUG [databricks.labs.lakebridge.reconcile.runner] Reconcile job id found in the install state.
01:41 INFO [databricks.labs.lakebridge.reconcile.runner] Triggering the reconcile job with job_id: `820117963787639`
01:41 INFO [databricks.labs.lakebridge.reconcile.runner] 'RECONCILE' job started. Please check the job_url `https://DATABRICKS_HOST/jobs/820117963787639/runs/675650261158459` for the current status.
01:44 INFO [tests.integration.reconcile.test_recon_e2e] Reconcile job run had 1 tasks
01:44 INFO [tests.integration.reconcile.test_recon_e2e] Task run_reconciliation has error message: ReconciliationException: ('Reconciliation **row** with id: 6693a472d57341acb59e8fd85908b3ea failed with exceptions for 1 table(s). Please check recon metrics for details.', ReconcileOutput(recon_id='6693a472d57341acb59e8fd85908b3ea', ...)). Reconciliation of target table dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_t3jtlfvwy failed while fetching source data from labs_CLOUD_ENV_TEST_CATALOG_remorph.dbo.diamonds_big_column: the generated hash query concatenates a DATE expression (COALESCE(CONVERT(DATE, [mined_at], 101), 1900-01-01)) with VARCHAR(MAX) operands using the + operator, which SQL Server rejects with (com.microsoft.sqlserver.jdbc.SQLServerException) 'The data types varchar(max) and date are incompatible in the add operator.' [full generated SELECT and JVM stack trace omitted]
01:44 INFO [tests.integration.reconcile.test_recon_e2e] Task run_reconciliation has error trace:
---------------------------------------------------------------------------
ReconciliationException                   Traceback (most recent call last)
File ~/.ipykernel/2180/command--1-1105669916:18
     15 entry = [ep for ep in metadata.distribution("databricks_labs_lakebridge").entry_points if ep.name == "reconcile"]
     16 if entry:
     17   # Load and execute the entrypoint, assumes no parameters
---> 18   entry[0].load()()
     19 else:
     20   import importlib

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/databricks/labs/lakebridge/reconcile/execute.py:61, in main(*argv)
     58 if operation_name == AGG_RECONCILE_OPERATION_NAME:
     59     return _trigger_reconcile_aggregates(w, table_recon, reconcile_config)
---> 61 return _trigger_recon(w, table_recon, reconcile_config)

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/databricks/labs/lakebridge/reconcile/execute.py:69, in _trigger_recon(w, table_recon, reconcile_config)
     64 def _trigger_recon(
     65     w: WorkspaceClient,
     66     table_recon: TableRecon,
     67     reconcile_config: ReconcileConfig,
     68 ):
---> 69     recon_output = TriggerReconService.trigger_recon(
     70         ws=w,
     71         spark=DatabricksSession.builder.getOrCreate(),
     72         table_recon=table_recon,
     73         reconcile_config=reconcile_config,
     74     )
     75     logger.info(f"Output: {recon_output}")

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/databricks/labs/lakebridge/reconcile/trigger_recon_service.py:55, in TriggerReconService.trigger_recon(ws, spark, table_recon, reconcile_config, local_test_run)
     52     for table_conf in table_recon.tables:
     53         TriggerReconService.recon_one(reconciler, recon_capture, reconcile_config, table_conf)
---> 55     return TriggerReconService.verify_successful_reconciliation(
     56         generate_final_reconcile_output(
     57             recon_id=recon_capture.recon_id,
     58             spark=spark,
     59             metadata_config=reconcile_config.metadata_config,
     60             local_test_run=local_test_run,
     61         ),
     62         reconcile_config.report_type,
     63     )
     64 finally:
     65     try:

File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/databricks/labs/lakebridge/reconcile/trigger_recon_service.py:251, in TriggerReconService.verify_successful_reconciliation(reconcile_output, report_type)
    245 logger.info(
    246     f"Reconciliation **{report_type}** with id: {reconcile_output.recon_id} ran for total {total_count} source tables and their targets."
    247     f" {success_count} tables succeeded, {exc_count} tables failed with exceptions and {mismatched_count} tables mismatched."
    248 )
    250 if exceptions:
--> 251     raise ReconciliationException(
    252         f"Reconciliation **{report_type}** with id: {reconcile_output.recon_id} failed with exceptions for {exc_count} table(s). Please check recon metrics for details.",
    253         reconcile_output=reconcile_output,
    254     )
    256 if mismatched:
    257     logger.error(
    258         f"Reconciliation **{report_type}** with id: {reconcile_output.recon_id} found mismatches in {mismatched_count} table(s). Please check recon metrics for details."
    259     )

ReconciliationException: ('Reconciliation **row** with id: 6693a472d57341acb59e8fd85908b3ea failed with exceptions for 1 table(s). Please check recon metrics for details.', ReconcileOutput(...)) [same exception payload and JVM stack trace as in the error message logged above]
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 2 table fixtures
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_tedcvsdep', metastore_id=None, name='dummy_tedcvsdep', owner=None, pipeline_id=None, properties={'RemoveAfter': '2026022803'}, row_filter=None, schema_name='dummy_sgpb24zge', securable_kind_manifest=None, sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_sgpb24zge/dummy_tedcvsdep', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_t3jtlfvwy', metastore_id=None, name='dummy_t3jtlfvwy', owner=None, pipeline_id=None, properties={'RemoveAfter': '2026022803'}, row_filter=None, schema_name='dummy_sgpb24zge', securable_kind_manifest=None, sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_sgpb24zge/dummy_t3jtlfvwy', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 volume fixtures
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing volume fixture: VolumeInfo(access_point=None, browse_only=None, catalog_name='dummy_cfnlf8kjy', comment=None, created_at=1772242891925, created_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', encryption_details=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge.dummy_sgpb24zge', metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='dummy_sgpb24zge', owner='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', schema_name='dummy_sgpb24zge', storage_location='abfss://labs-CLOUD_ENV-TEST_CATALOG-container@databrickslabsstorage.dfs.core.windows.net/8952c1e3-b265-4adf-98c3-6f755e2e1453/volumes/45337f7d-2d4a-46d2-92ee-41b2d0eab6bc', updated_at=1772242891925, updated_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', volume_id='45337f7d-2d4a-46d2-92ee-41b2d0eab6bc', volume_type=<VolumeType.MANAGED: 'MANAGED'>)
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 schema fixtures
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='dummy_cfnlf8kjy', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='dummy_cfnlf8kjy.dummy_sgpb24zge', metastore_id=None, name='dummy_sgpb24zge', owner=None, properties=None, schema_id=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 catalog fixtures
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=<CatalogType.MANAGED_CATALOG: 'MANAGED_CATALOG'>, comment=None, connection_name=None, created_at=1772242889513, created_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', effective_predictive_optimization_flag=EffectivePredictiveOptimizationFlag(value=<EnablePredictiveOptimization.DISABLE: 'DISABLE'>, inherited_from_name='primary', inherited_from_type=None), enable_predictive_optimization=<EnablePredictiveOptimization.INHERIT: 'INHERIT'>, full_name='dummy_cfnlf8kjy', isolation_mode=<CatalogIsolationMode.OPEN: 'OPEN'>, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='dummy_cfnlf8kjy', options=None, owner='3fe685a1-96cc-4fec-8cdb-6944f5c9787e', properties={'RemoveAfter': '2026022803'}, provider_name=None, provisioning_info=None, securable_type=<SecurableType.CATALOG: 'CATALOG'>, share_name=None, storage_location=None, storage_root=None, updated_at=1772242889513, updated_by='3fe685a1-96cc-4fec-8cdb-6944f5c9787e')
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 cluster fixtures
01:44 DEBUG [databricks.labs.pytester.fixtures.baseline] removing cluster fixture: <databricks.sdk.service._internal.Wait object at 0x7f7da36d2110>
[gw3] linux -- Python 3.10.19 /home/runner/work/lakebridge/lakebridge/.venv/bin/python

Flaky tests:

  • 🤪 test_installs_and_runs_pypi_bladebridge (27.136s)
  • 🤪 test_transpiles_informatica_to_sparksql (18.199s)
  • 🤪 test_transpile_teradata_sql_non_interactive[True] (20.432s)
  • 🤪 test_transpile_teradata_sql (21.385s)
  • 🤪 test_transpile_teradata_sql_non_interactive[False] (5.874s)
  • 🤪 test_transpiles_informatica_to_sparksql_non_interactive[False] (13.609s)

Running from acceptance #3959

@hiroyukinakazato-db
Contributor Author

E2E Test Evidence

Test 1: Without --switch-config-path (default behavior)

Command:

databricks labs lakebridge llm-transpile \
  --input-source /tmp/switch_e2e_test \
  --output-ws-folder /Workspace/Users/hiroyuki.nakazato@databricks.com/switch-io/output/test-no-config \
  --source-dialect mssql \
  --catalog-name hinak_catalog_aws_apne1 \
  --schema-name switch \
  --volume switch_volume \
  --foundation-model databricks-claude-sonnet-4-5 \
  --accept-terms true \
  --profile e2-demo-tokyo

Output:

14:36:13 INFO [d.l.l.transpiler.switch_runner] Uploading /tmp/switch_e2e_test to /Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053613-x3za...
14:36:13 INFO [d.l.l.transpiler.switch_runner] Upload complete: /Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053613-x3za
14:36:13 INFO [d.l.l.transpiler.switch_runner] Triggering Switch job with job_id: 225510371018306
14:36:13 INFO [d.l.l.transpiler.switch_runner] Switch LLM transpilation job started: https://e2-demo-tokyo.cloud.databricks.com/jobs/225510371018306/runs/556928325013890

Job Parameters (switch_config_path not included):

databricks jobs get-run 556928325013890 --profile e2-demo-tokyo --output json | jq '.job_parameters'
[
  {"name": "source_tech", "value": "mssql"},
  {"name": "input_dir", "value": "/Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053613-x3za"},
  {"name": "output_dir", "value": "/Workspace/Users/hiroyuki.nakazato@databricks.com/switch-io/output/test-no-config"},
  {"name": "foundation_model", "value": "databricks-claude-sonnet-4-5"},
  {"name": "catalog", "value": "hinak_catalog_aws_apne1"},
  {"name": "schema", "value": "switch"}
]
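
For anyone reproducing this check programmatically, a minimal sketch using the Databricks SDK for Python (the run ID and profile are taken from the log above; the assertion mirrors the expectation shown here, not the actual test code):

# Sketch: confirm the default run received no switch_config_path parameter.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(profile="e2-demo-tokyo")   # same profile as the CLI calls
run = w.jobs.get_run(556928325013890)          # run ID from the output above
params = {p.name: p.value for p in run.job_parameters or []}
assert "switch_config_path" not in params      # default behavior: key omitted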

Test 2: With --switch-config-path parameter

Command:

databricks labs lakebridge llm-transpile \
  --input-source /tmp/switch_e2e_test \
  --output-ws-folder /Workspace/Users/hiroyuki.nakazato@databricks.com/switch-io/output/test-with-config \
  --source-dialect mssql \
  --catalog-name hinak_catalog_aws_apne1 \
  --schema-name switch \
  --volume switch_volume \
  --foundation-model databricks-claude-sonnet-4-5 \
  --switch-config-path /Workspace/Users/hiroyuki.nakazato@databricks.com/switch_config_test.yml \
  --accept-terms true \
  --profile e2-demo-tokyo

Output:

14:30:39 INFO [d.l.l.transpiler.switch_runner] Uploading /tmp/switch_e2e_test to /Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053039-nggi...
14:30:39 INFO [d.l.l.transpiler.switch_runner] Upload complete: /Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053039-nggi
14:30:39 INFO [d.l.l.transpiler.switch_runner] Triggering Switch job with job_id: 225510371018306
14:30:40 INFO [d.l.l.transpiler.switch_runner] Switch LLM transpilation job started: https://e2-demo-tokyo.cloud.databricks.com/jobs/225510371018306/runs/242707127988892

Job Parameters (switch_config_path included):

databricks jobs get-run 242707127988892 --profile e2-demo-tokyo --output json | jq '.job_parameters'
[
  {"name": "source_tech", "value": "mssql"},
  {"name": "input_dir", "value": "/Volumes/hinak_catalog_aws_apne1/switch/switch_volume/input-20260130053039-nggi"},
  {"name": "output_dir", "value": "/Workspace/Users/hiroyuki.nakazato@databricks.com/switch-io/output/test-with-config"},
  {"name": "foundation_model", "value": "databricks-claude-sonnet-4-5"},
  {"name": "catalog", "value": "hinak_catalog_aws_apne1"},
  {"name": "schema", "value": "switch"},
  {"name": "switch_config_path", "value": "/Workspace/Users/hiroyuki.nakazato@databricks.com/switch_config_test.yml"}
]
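
Comparing the two runs, the CLI appears to omit the parameter entirely when the flag is not given, rather than passing an empty value. A sketch of that conditional inclusion (only the switch_config_path job-parameter name is confirmed by the outputs above; the function and its other argument names are illustrative):

# Sketch of how the optional flag might be threaded into the job parameters.
def build_job_parameters(
    source_tech: str,
    input_dir: str,
    output_dir: str,
    foundation_model: str,
    catalog: str,
    schema: str,
    switch_config_path: str | None = None,
) -> dict[str, str]:
    params = {
        "source_tech": source_tech,
        "input_dir": input_dir,
        "output_dir": output_dir,
        "foundation_model": foundation_model,
        "catalog": catalog,
        "schema": schema,
    }
    # Omit the key entirely when unset so Switch falls back to its default config.
    if switch_config_path is not None:
        params["switch_config_path"] = switch_config_path
    return params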

Test 3: Validation error for invalid path

Command:

databricks labs lakebridge llm-transpile \
  --input-source /tmp/switch_e2e_test \
  --output-ws-folder /Workspace/Users/hiroyuki.nakazato@databricks.com/switch-io/output/test \
  --source-dialect mssql \
  --catalog-name hinak_catalog_aws_apne1 \
  --schema-name switch \
  --volume switch_volume \
  --foundation-model databricks-claude-sonnet-4-5 \
  --switch-config-path /Users/invalid/path.yaml \
  --accept-terms true \
  --profile e2-demo-tokyo

Output:

14:31:56 ERROR [d.l.lakebridge.llm-transpile] ValueError: Invalid value for '--switch-config-path': path must start with /Workspace/. Got: '/Users/invalid/path.yaml'
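
The error message implies a simple prefix check that runs before the job is ever triggered. A minimal sketch of that validation (the helper name is hypothetical; only the /Workspace/ prefix rule and the message text are confirmed by the output above):

# Sketch: reject non-workspace paths before triggering the Switch job.
def _validate_switch_config_path(path: str) -> str:
    if not path.startswith("/Workspace/"):
        raise ValueError(
            "Invalid value for '--switch-config-path': "
            f"path must start with /Workspace/. Got: '{path}'"
        )
    return path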

@hiroyukinakazato-db hiroyukinakazato-db marked this pull request as ready for review January 30, 2026 05:39
@hiroyukinakazato-db hiroyukinakazato-db requested a review from a team as a code owner January 30, 2026 05:39
Collaborator

@gueniai gueniai left a comment

We need to add details to the docs as well

@hiroyukinakazato-db
Contributor Author

Documentation updated. @gueniai @asnare @sundarshankar89 Ready for review!

Collaborator

@gueniai gueniai left a comment

LGTM


Labels

enhancement (New feature or request), switch


Development

Successfully merging this pull request may close these issues.

[FEATURE]: Allow specifying custom Switch config file for llm-transpile command
