
Unable to set local_ssd_count to 0 on cluster configuration #1258

@ne250040

Description

Describe the issue

Using the UI, I updated the cluster configuration to set the number of local SSDs to zero, and the cluster seems to run normally. However, setting this value to 0 via a Databricks Asset Bundle reverts it to the default.

Configuration

With the configuration below, the cluster information on the job shows the local SSD count set to 1 instead of 0:

resources:
  jobs:
    jb_ingestion_some_data:
      name: jb_ingestion_some_data
      job_clusters:
        - job_cluster_key: some-cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
            num_workers: 2
            data_security_mode: "SINGLE_USER"
            node_type_id: n2-highmem-4
            gcp_attributes:
              google_service_account: ${var.gcp_cloud_service_account}
              local_ssd_count: 0
      tasks:
        - task_key: load_some_data
          job_cluster_key: some-cluster
          notebook_task:
            notebook_path: "${var.notebook_paths}/ingestion/load_fd_some_data.py"

Steps to reproduce the behavior

  1. Use the above configuration in the Asset Bundle configuration.
  2. Deploy the bundle to a workspace.
  3. Check the cluster configuration in the job details.
  4. Under Advanced Options, "# Local SSDs" shows 1 instead of 0.
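The steps above can also be sketched from a terminal. The job ID below is a placeholder, and exact flags may differ between CLI versions; this reflects my understanding of Databricks CLI v0.214.0.

```shell
# Step 1-2: validate the resolved configuration, then deploy the bundle.
databricks bundle validate
databricks bundle deploy

# Step 3-4: instead of the UI, inspect the deployed job's cluster settings
# via the Jobs API (replace 123456789 with the actual job ID).
databricks jobs get 123456789 \
  | jq '.settings.job_clusters[0].new_cluster.gcp_attributes'
# local_ssd_count should be 0 here, but comes back as 1 (or absent).
```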

Expected Behavior

When local_ssd_count is set to 0 in the configuration, the deployed cluster should have 0 local SSDs.

Actual Behavior

The local SSD count on the deployed cluster configuration is 1 instead of 0.
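A plausible mechanism (my assumption, not confirmed by the logs): if the CLI serializes cluster settings with Go-style `omitempty` semantics, an explicit 0 is indistinguishable from an unset field, so the backend falls back to its default. A minimal Python sketch of that behavior, with illustrative names that are not the actual CLI internals:

```python
# Hypothetical sketch: a serializer that drops "empty" (falsy) values,
# mimicking Go's `omitempty` struct tags.
def to_payload(attrs: dict) -> dict:
    # An explicit 0 is falsy, so it is dropped along with unset fields.
    return {key: value for key, value in attrs.items() if value}

gcp_attributes = {
    "google_service_account": "sa@example.iam.gserviceaccount.com",
    "local_ssd_count": 0,  # explicitly configured, but falsy
}

# local_ssd_count is missing from the payload, so the server-side
# default (1) would apply.
print(to_payload(gcp_attributes))
```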

OS and CLI version

CLI: Databricks CLI v0.214.0
OS: MacOS

Is this a regression?

This is a new setup; I haven't tried previous versions.

Debug Logs

output_logs_sample.log

Labels

DABs related issues
