Closed
Labels
DABs related issues
Description
Describe the issue
In the UI, I updated the cluster configuration to set the number of local SSDs to zero, and the cluster runs normally. However, setting this value to 0 via an Asset Bundle reverts it to the default.
Configuration
With the configuration below, the job's cluster details show the local SSD count as 1 instead of 0:

```yaml
resources:
  jobs:
    jb_ingestion_some_data:
      name: jb_ingestion_some_data
      job_clusters:
        - job_cluster_key: some-cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
            num_workers: 2
            data_security_mode: "SINGLE_USER"
            node_type_id: n2-highmem-4
            gcp_attributes:
              google_service_account: ${var.gcp_cloud_service_account}
              local_ssd_count: 0
      tasks:
        - task_key: load_some_data
          job_cluster_key: some-cluster
          notebook_task:
            notebook_path: "${var.notebook_paths}/ingestion/load_fd_some_data.py"
```
Steps to reproduce the behavior
- Use the configuration above in the Asset Bundle configuration
- Deploy the bundle to a workspace
- Check the cluster configuration in the job details
- Under Advanced Options, "# Local SSDs" shows 1 instead of 0
Expected Behavior
When local_ssd_count is set to 0 in the configuration, the deployed cluster should have 0 local SSDs.
Actual Behavior
The local SSD count in the deployed cluster configuration is 1 instead of 0.
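One plausible explanation (an assumption, not confirmed from the CLI source) is that the deploy path filters out "empty" values before sending the request, so an explicit 0 is dropped and the backend applies its default. A hypothetical Python sketch of that failure mode:

```python
import json

def drop_unset_buggy(payload: dict) -> dict:
    # Buggy filter: 0 is falsy in Python, so an explicit 0 is
    # discarded along with genuinely unset values.
    return {k: v for k, v in payload.items() if v}

def drop_unset_fixed(payload: dict) -> dict:
    # Correct filter: only None means "unset"; 0 is preserved.
    return {k: v for k, v in payload.items() if v is not None}

# Hypothetical gcp_attributes payload mirroring the bundle config above.
attrs = {
    "google_service_account": "sa@project.iam.gserviceaccount.com",
    "local_ssd_count": 0,  # user explicitly requested zero local SSDs
}

print(json.dumps(drop_unset_buggy(attrs)))
# local_ssd_count is missing from the request body
print(json.dumps(drop_unset_fixed(attrs)))
# local_ssd_count: 0 survives and reaches the API
```

If the field never reaches the API, the server cannot distinguish "0 requested" from "not set", which would match the observed revert-to-default behavior.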
OS and CLI version
CLI: Databricks CLI v0.214.0
OS: macOS
Is this a regression?
This is a new setup; I have not tested previous versions.
Debug Logs