
Conversation

Member

@zacharyblasczyk commented Feb 12, 2025

Summary by CodeRabbit

  • New Features

    • Introduced an example configuration file to simplify test setup.
    • Added new Terraform examples for managing control plane environments and resource filters.
  • Improvements

    • Updated the default API endpoint to https://app.ctrlplane.dev.
    • Enhanced error messaging, logging, and auto-generation of resource identifiers.
  • Documentation

    • Revamped the README and extended guides on environment configuration and resource filters.
  • Chores

    • Upgraded dependencies and streamlined workflow processes.


coderabbitai bot commented Feb 12, 2025

Walkthrough

This pull request introduces a wide range of changes across environment configuration, build processes, client libraries, documentation, Terraform provider resources, and testing. A new .env.example file has been added with essential test variables, and string formatting in workflow files has been standardized. The client library now supports enhanced deployment and job management with new types and methods. The Terraform provider received extensive updates, including new resource and data source schemas, error handling improvements, and utility functions. Additionally, integration tests, documentation, example configurations, and dependency updates were incorporated.

Changes

File(s) | Summary
.env.example, .gitignore, .github/workflows/test.yml, README.md | Added new environment configuration file, updated ignore patterns, standardized delimiters in workflows, and added .env instructions in README.
GNUmakefile, go.mod | Introduced new make targets (generate-client, update, etc.) and updated Go version and dependencies.
client/client.gen.go, client/client.go, client/openapi.generate.sh (deleted), client/openapi.v1.yaml (deleted) | Renamed job status constants, introduced new types and methods for deployment management, updated go:generate directive to use a remote OpenAPI JSON.
docs/** | Added and updated documentation for data sources (environment, resource_filter) and resources (environment, system) along with base_url changes.
examples/resources/ctrlplane_environment/main.tf, examples/resources/ctrlplane_resource_filter/main.tf | Introduced new Terraform configurations and outputs for environments and resource filters, demonstrating various filter criteria.
internal/integration/**, testing/acctest/provider.go, testing/testutils/config.go | Added extensive integration tests, test suites, and helper functions for verifying provider and resource behavior.
internal/provider/** | Overhauled provider code with new resource types (EnvironmentResource, ResourceFilterResource), data source implementations, schema definitions, improved error handling, logging, retry mechanisms, slug management, and added utility functions.
terraform-provider-scaffolding-framework, terraform-registry-manifest.json | Minor updates: removal of a subproject commit reference and JSON formatting adjustments.

Sequence Diagram(s)

sequenceDiagram
    participant C as Client Library
    participant API as API Server
    participant P as Response Parser
    C->>API: CreateDeployment (with request body)
    API-->>C: HTTP response with deployment details
    C->>P: ParseCreateDeploymentResponse
    P-->>C: Parsed deployment response
sequenceDiagram
    participant U as User/Terraform
    participant TF as Terraform Engine
    participant PR as Provider
    participant API as API Client
    U->>TF: Run "terraform apply"
    TF->>PR: Invoke Environment Resource Create
    PR->>API: Send API request to create environment
    API-->>PR: Return environment details
    PR->>TF: Respond with state configuration

Poem

Hopping through code with joy so spry,
I found new fields and targets on high,
Environments and filters all in a row,
Tests and docs in a joyful flow,
I'm a rabbit celebrating each change with a happy "Yip!"
🐰✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool’s configuration or disable the tool if it’s a critical failure.

🔧 golangci-lint (1.62.2)

Error: can't load config: the Go language version (go1.23) used to build golangci-lint is lower than the targeted Go version (1.24.1)
Failed executing command with error: can't load config: the Go language version (go1.23) used to build golangci-lint is lower than the targeted Go version (1.24.1)


package client

-//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml openapi.v1.yaml
+//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/refs/heads/main/openapi.v1.json
Member Author

This is the right way to generate the API spec.

$${TEST:-./internal/provider/...} $${TESTARGS}

test-acceptance:
testacc-quiet:
Member Author

Useful for local testing because the tests are quite noisy.

- usetesting
- predeclared
- staticcheck
- tenv
Member Author

These are just deprecated now and I added one for tests.


const (
TestModePersist TestMode = "persistent"
// TestModeCleanup TestMode = "cleanup".
Member Author

We need a system list READ (the R in CRUD) to be able to run tests, so that we can clean the systems up after we have inspected them.

@zacharyblasczyk marked this pull request as ready for review March 9, 2025 05:19
@coderabbitai bot left a comment

Actionable comments posted: 7

🧹 Nitpick comments (42)
client/client.go (1)

6-6: Consider version pinning for the remote API specification URL.

The change from a local OpenAPI file to a remote URL is a good approach for centralizing the API definition. However, using refs/heads/main means you're always pulling from the latest main branch, which could introduce unexpected changes when the upstream API spec is updated.

Consider using a specific commit hash instead of a branch reference for better stability and reproducible builds:

-//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/refs/heads/main/openapi.v1.json
+//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/<commit-hash>/openapi.v1.json

Also note that this change introduces a network dependency during builds, which could affect CI/CD reliability and offline development capabilities.

.env.example (1)

15-17: Duplicate environment variable definitions

The TF_ACC and TF_LOG variables are duplicated in the "Optional test configuration" section, which is redundant since they're already defined above with better comments.

-# Optional test configuration
-TF_ACC=1  # Set to 1 to run acceptance tests
-TF_LOG=INFO  # Set to DEBUG for verbose logging during tests 
internal/provider/utils.go (4)

35-43: Review symbol replacements in the Slugify function

Some of the symbol replacements don't actually change anything ('+' → '+' and '*' → '*' at lines 35 and 40). Consider either:

  1. Removing these redundant mappings, or
  2. Documenting why they're explicitly included while preserving the symbols
-		{"+", "+"},
+		// Keep the plus symbol as is
+		{"+", "plus"},  // Or remove this line if you want to keep + unchanged

-		{"*", "*"},
+		// Keep the asterisk symbol as is
+		{"*", "star"},  // Or remove this line if you want to keep * unchanged

68-74: Optimize regex compilation for better performance

The regex patterns are recompiled every time Slugify is called, which is inefficient for frequently called functions. Consider compiling these patterns once as package variables.

 // Slugify converts a string to a valid slug format.
 // It replaces spaces with hyphens, collapses multiple spaces to a single hyphen,
 // and converts to lowercase.
+
+var (
+	// Pre-compiled regex patterns for Slugify
+	spaceRegex = regexp.MustCompile(`\s+`)
+	hyphenRegex = regexp.MustCompile(`-+`)
+)
+
 func Slugify(s string) string {
     // Convert to lowercase
     s = strings.ToLower(s)
     
     // Rest of the implementation...
     
     // Replace spaces with hyphens
-    spaceReg := regexp.MustCompile(`\s+`)
-    s = spaceReg.ReplaceAllString(s, "-")
+    s = spaceRegex.ReplaceAllString(s, "-")
 
     // Replace consecutive hyphens with a single hyphen
-    reg := regexp.MustCompile(`-+`)
-    s = reg.ReplaceAllString(s, "-")
+    s = hyphenRegex.ReplaceAllString(s, "-")

133-135: Pre-compile the regex pattern for slug validation

Similar to the Slugify function, the regex pattern in ValidateString is compiled on each validation. For better performance, consider pre-compiling this pattern.

 // SlugValidator validates that a slug is in the correct format.
-type SlugValidator struct{}
+type SlugValidator struct{
+	// Pre-compiled regex pattern for slug validation
+	slugRegex *regexp.Regexp
+}
+
+// NewSlugValidator creates a new instance of SlugValidator with pre-compiled regex
+func NewSlugValidator() SlugValidator {
+	return SlugValidator{
+		slugRegex: regexp.MustCompile(`^[a-z0-9]([a-z0-9-]*[a-z0-9])?$`),
+	}
+}

// rest of the code...

func (v SlugValidator) ValidateString(ctx context.Context, req validator.StringRequest, resp *validator.StringResponse) {
	// Existing validation code...
	
-	pattern := `^[a-z0-9]([a-z0-9-]*[a-z0-9])?$`
-	regex := regexp.MustCompile(pattern)
-	if !regex.MatchString(value) {
+	if !v.slugRegex.MatchString(value) {
		tflog.Debug(ctx, "Slug format validation failed", map[string]interface{}{
			"value":   value,
			"path":    req.Path.String(),
-			"pattern": pattern,
+			"pattern": v.slugRegex.String(),
		})
		// Rest of validation code...

101-103: Avoid duplicate content in Description and MarkdownDescription

The Description and MarkdownDescription methods contain identical content. Consider creating a shared helper function to avoid duplication.

+// getSlugValidatorDescription returns the description text for the validator
+func (v SlugValidator) getSlugValidatorDescription() string {
+	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+}

func (v SlugValidator) Description(ctx context.Context) string {
-	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+	return v.getSlugValidatorDescription()
}

func (v SlugValidator) MarkdownDescription(ctx context.Context) string {
-	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+	return v.getSlugValidatorDescription()
}
testing/acctest/provider.go (2)

37-38: Update hardcoded provider source and version

The provider configuration uses hardcoded values for source (hashicorp/ctrlplane) and version (0.0.1), which may not match the actual provider details. Consider making these configurable or using accurate values.

    ctrlplane = {
-      source = "hashicorp/ctrlplane"
-      version = "0.0.1"
+      source = "ctrlplanedev/ctrlplane"
+      # Version can be omitted for development testing
    }

51-59: Add BaseURL validation to PreCheck function

The PreCheck function validates the API key and workspace environment variables but doesn't check if the base URL is set. While a default value is used if empty, it might be good to at least log a warning.

func PreCheck(t *testing.T) {
	if v := os.Getenv(APIKeyEnvVar); v == "" {
		t.Fatalf("%s must be set for acceptance tests", APIKeyEnvVar)
	}
	if v := os.Getenv(WorkspaceEnvVar); v == "" {
		t.Fatalf("%s must be set for acceptance tests", WorkspaceEnvVar)
	}
+	if v := os.Getenv(BaseURLEnvVar); v == "" {
+		t.Logf("Warning: %s not set, using default base URL", BaseURLEnvVar)
+	}
}
internal/provider/retry.go (2)

20-43: Consider adding exponential backoff to the retry logic

The retry implementation is solid with good logging and timing controls. However, it uses a fixed delay between retries, which might not be optimal for all scenarios.

Consider implementing an exponential backoff strategy to improve resilience against transient issues:

func Retry(ctx context.Context, cfg RetryConfig, operation func() (bool, error)) error {
	startTime := time.Now()
	var lastErr error
	for i := 0; i < cfg.MaxRetries; i++ {
		if time.Since(startTime) > cfg.TotalWaitTime {
			tflog.Warn(ctx, "Exceeded total retry wait time", map[string]interface{}{
				"max_wait_seconds": cfg.TotalWaitTime.Seconds(),
			})
			break
		}
		done, err := operation()
		if done {
			return nil
		}
		lastErr = err
		tflog.Debug(ctx, "Retrying operation", map[string]interface{}{
			"retry": i + 1,
		})
-		time.Sleep(cfg.RetryDelay)
+		// Calculate backoff with jitter to prevent thundering herd
+		backoff := time.Duration(float64(cfg.RetryDelay) * math.Pow(2, float64(i)))
+		if backoff > cfg.TotalWaitTime {
+			backoff = cfg.TotalWaitTime
+		}
+		// Add jitter (±20%)
+		jitter := time.Duration(float64(backoff) * (0.8 + 0.4*rand.Float64()))
+		time.Sleep(jitter)
	}
	return lastErr
}

Don't forget to add the necessary imports:

import (
    "math"
    "math/rand"
)

22-22: Consider passing context to the operation function

The current operation function signature doesn't include the context parameter, which limits its ability to respect cancellation signals or deadlines.

- func Retry(ctx context.Context, cfg RetryConfig, operation func() (bool, error)) error {
+ func Retry(ctx context.Context, cfg RetryConfig, operation func(context.Context) (bool, error)) error {
    startTime := time.Now()
    var lastErr error
    for i := 0; i < cfg.MaxRetries; i++ {
        // ...
-       done, err := operation()
+       done, err := operation(ctx)
        // ...
    }
    return lastErr
}
docs/data-sources/environment.md (1)

44-87: Consider adding examples to clarify deeply nested structures

While the documentation for the nested conditions is comprehensive, deeply nested structures can be difficult for users to understand without examples.

Consider adding an example usage section showing how to define a resource filter with multiple nested conditions, similar to:

data "ctrlplane_environment" "example" {
  name      = "production"
  system_id = "sys-123456"

  # This will be populated after the data source is read
  # resource_filter will look something like:
  #   resource_filter = {
  #     type     = "selector"
  #     operator = "and"
  #     conditions = [
  #       {
  #         type     = "comparison"
  #         operator = "equals"
  #         key      = "environment"
  #         value    = "production"
  #       },
  #       {
  #         type     = "selector"
  #         operator = "or"
  #         conditions = [
  #           {
  #             type     = "comparison"
  #             operator = "equals"
  #             key      = "region"
  #             value    = "us-west"
  #           }
  #         ]
  #       }
  #     ]
  #   }
}
internal/provider/environment_model.go (1)

45-55: Consider reducing code duplication in SetDefaults methods

Both SetDefaults methods contain identical code. Consider extracting the common logic to a helper function to improve maintainability.

+func setEnvironmentDefaults(description *types.String, metadata *types.Map, releaseChannels *[]types.String) {
+	if description.IsNull() {
+		*description = types.StringValue("")
+	}
+	if metadata.IsNull() {
+		*metadata = types.MapNull(types.StringType)
+	}
+	if *releaseChannels == nil {
+		*releaseChannels = []types.String{}
+	}
+}

 func (e *EnvironmentModel) SetDefaults() {
-	if e.Description.IsNull() {
-		e.Description = types.StringValue("")
-	}
-	if e.Metadata.IsNull() {
-		e.Metadata = types.MapNull(types.StringType)
-	}
-	if e.ReleaseChannels == nil {
-		e.ReleaseChannels = []types.String{}
-	}
+	setEnvironmentDefaults(&e.Description, &e.Metadata, &e.ReleaseChannels)
 }

 func (e *EnvironmentDataSourceModel) SetDefaults() {
-	if e.Description.IsNull() {
-		e.Description = types.StringValue("")
-	}
-	if e.Metadata.IsNull() {
-		e.Metadata = types.MapNull(types.StringType)
-	}
-	if e.ReleaseChannels == nil {
-		e.ReleaseChannels = []types.String{}
-	}
+	setEnvironmentDefaults(&e.Description, &e.Metadata, &e.ReleaseChannels)
 }
testing/testutils/config.go (1)

33-42: CheckDestroy implementation could be more specific

The current implementation only checks if a resource of the specified type exists, but it doesn't verify if the specific resource instance was destroyed.

func CheckDestroy(s *terraform.State, resourceType string) error {
  for _, rs := range s.RootModule().Resources {
    if rs.Type != resourceType {
      continue
    }
-    return fmt.Errorf("resource %s still exists", resourceType)
+    return fmt.Errorf("resource %s with ID %s still exists", resourceType, rs.Primary.ID)
  }
  return nil
}
examples/resources/ctrlplane_resource_filter/main.tf (2)

35-50: Data source duplicates resource filter structure

The data source definition duplicates the exact same filter conditions as the resource. Consider using variables or local values to avoid duplication if this is part of the same configuration.


52-69: Consider uncommenting example code for better documentation

The commented sections contain valuable examples that demonstrate how to reference resource filters and utilize outputs. Consider uncommenting these sections to provide a more complete example, or add a note explaining why they're commented out.

Also applies to: 105-135

internal/provider/environment_schema.go (1)

16-69: Consider adding conflict detection between resource_filter and resource_filter_id

Your schema documentation mentions that resource_filter_id and resource_filter are mutually exclusive, but there's no validation to prevent users from specifying both. Consider adding explicit validation to check for this conflict.

You could implement this in the resource's ConfigValidators method:

// In the resource implementation
func (r *EnvironmentResource) ConfigValidators(ctx context.Context) []resource.ConfigValidator {
  return []resource.ConfigValidator{
    resourcevalidator.ConflictingAttributes(
      path.MatchRoot("resource_filter"),
      path.MatchRoot("resource_filter_id"),
    ),
  }
}
GNUmakefile (1)

86-90: Clean target is helpful but could be more comprehensive

The clean target removes common test artifacts but might miss some files. Consider adding patterns for additional test-related files or directories that might be generated.

clean:
	rm -f terraform.log
	rm -rf .terraform
	rm -f .terraform.lock.hcl
	rm -f terraform.tfstate*
+	rm -f plan.tfplan
+	rm -f crash.log
+	find . -name ".terraform" -type d -exec rm -rf {} +
internal/integration/environment_test.go (1)

38-108: Consider consolidating environment creation code into a reusable helper.

All these test blocks for creating different environments have nearly identical setup and validation logic (e.g., generating a name, metadata, making the API call, verifying the response, and cleaning up). Consolidating these steps into a generic helper could reduce duplication, improve readability, and make it easier to maintain these tests.
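
A minimal sketch of what such a helper could look like, written against a simplified stand-in for the generated client (envCreator, createTestEnvironment, and their signatures are illustrative, not the repository's actual API):

package integration

import (
	"context"
	"fmt"
	"testing"

	"github.com/google/uuid"
)

// envCreator is a hypothetical stand-in for the generated API client; the
// real create/delete calls in this repository will have different signatures.
type envCreator interface {
	CreateEnvironment(ctx context.Context, systemID uuid.UUID, name string, metadata map[string]string) (uuid.UUID, error)
	DeleteEnvironment(ctx context.Context, environmentID uuid.UUID) error
}

// createTestEnvironment centralizes the repeated setup: generate a unique
// name, create the environment, fail fast on error, and register cleanup.
func createTestEnvironment(t *testing.T, ctx context.Context, api envCreator, systemID uuid.UUID, prefix string, metadata map[string]string) uuid.UUID {
	t.Helper()

	name := fmt.Sprintf("test-%s-%s", prefix, uuid.New().String()[:8])
	envID, err := api.CreateEnvironment(ctx, systemID, name, metadata)
	if err != nil {
		t.Fatalf("failed to create environment %q: %v", name, err)
	}

	// Clean up even if later assertions fail or the test context is canceled.
	t.Cleanup(func() {
		if err := api.DeleteEnvironment(context.Background(), envID); err != nil {
			t.Logf("cleanup: failed to delete environment %s: %v", envID, err)
		}
	})
	return envID
}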

internal/integration/integration_suite_test.go (1)

20-24: Remove or refactor unused code for clarity.

Line 22 is commented out, suggesting a previous approach (TestModeCleanup) that is no longer employed. Removing or refining it helps keep the code streamlined.

internal/integration/test_helpers_test.go (5)

25-51: Consider trimming whitespace from workspace string.
If workspaceStr can have leading or trailing whitespace (for example, from environment variables), it may fail to parse as a UUID or slug unexpectedly. Consider strings.TrimSpace(workspaceStr) to preempt confusion.

 func getWorkspaceID(ctx context.Context, workspaceStr string, apiClient *client.ClientWithResponses) (uuid.UUID, error) {
+   workspaceStr = strings.TrimSpace(workspaceStr)
    Logger.Debug("getting workspace ID",

53-92: Increase randomness of test system naming to avoid collisions.
Currently, the system name uses 6 characters from a newly generated UUID. Under frequent test executions, collisions can occur. Consider using more characters or a timestamp.

-shortUUID := uuid.New().String()[:6]
-systemName := fmt.Sprintf("test-%s-%s", namePrefix, shortUUID)
+shortUUID := uuid.New().String()[:8]
+systemName := fmt.Sprintf("test-%s-%s", namePrefix, shortUUID)

94-114: Handle 404 responses during system deletion.
When the system is missing, returning a 404 from the API typically indicates that the system no longer exists. You may choose to treat a 404 as a successful deletion instead of an error, to ensure idempotent cleanup.

 _, err := apiClient.DeleteSystemWithResponse(ctx, systemID.String())
 if err != nil {
+   // If the error is a 404, consider logging and returning nil
    return fmt.Errorf("failed to delete system: %w", err)
 }

116-136: Same note for environment deletion: handle 404 responses.
Similar logic applies here: returning a 404 can be treated as a successful deletion, which simplifies cleanup for partially removed environments.

 _, err := apiClient.DeleteEnvironmentWithResponse(ctx, environmentID.String())
 if err != nil {
+   // If the error is a 404, consider logging and returning nil
    return fmt.Errorf("failed to delete environment: %w", err)
 }
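
The same idea applies to both deletion helpers. A hedged sketch, assuming the generated *WithResponse types expose a StatusCode() accessor as oapi-codegen clients typically do (the wrapper name and interface below are illustrative):

package integration

import (
	"context"
	"fmt"
	"net/http"
)

// deleteResult abstracts the generated Delete*WithResponse return types;
// StatusCode() is the usual oapi-codegen accessor, but treat this as an assumption.
type deleteResult interface {
	StatusCode() int
}

// deleteIgnoringNotFound makes cleanup idempotent: a 404 means the object is
// already gone, so it is reported as success rather than an error.
func deleteIgnoringNotFound(ctx context.Context, what string, call func(context.Context) (deleteResult, error)) error {
	resp, err := call(ctx)
	if err != nil {
		return fmt.Errorf("failed to delete %s: %w", what, err)
	}
	switch {
	case resp.StatusCode() == http.StatusNotFound:
		return nil // already deleted
	case resp.StatusCode() >= 300:
		return fmt.Errorf("failed to delete %s: unexpected status %d", what, resp.StatusCode())
	default:
		return nil
	}
}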

152-167: Revisit ‘cleanup mode is no longer supported’ logic.
This function logs “cleanup mode is no longer supported,” but may still be necessary for older test pipelines. If truly unsupported, consider removing it or clarifying fallback behavior.

internal/provider/environment_resource.go (2)

55-256: Consider splitting the Create method into smaller functions.
The Create method contains complex logic for metadata, policy, and resource filter handling. Splitting it into helper functions (e.g., one for validating resource filter usage, another for preparing API payload, etc.) can improve readability and maintainability.


462-473: Confirm whether a 404 is acceptable on environment deletion.
If the environment no longer exists, the provider can ignore a 404 to maintain a consistent Terraform experience. This ensures repeated deletes do not fail.

internal/provider/resource_filter_schema.go (1)

17-277: Avoid duplicating logic between resource filters and data source filters.
Both resource and data source schemas share similar definitions. Consider refactoring common parts into a single helper to keep them consistent and reduce duplication.

internal/provider/environment_data_source.go (3)

98-103: Handle duplicate environment names gracefully.
This section stops searching after finding the first environment with a matching name. If multiple environments share the same name, the code may inadvertently return the wrong one. To avoid confusion, consider raising an error if more than one environment is found or providing a mechanism for users to specify a more explicit match.
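
One way to make the ambiguity explicit, sketched against a minimal placeholder type rather than the generated client (environmentSummary and findEnvironmentByName are illustrative names):

package provider

import "fmt"

// environmentSummary is a minimal stand-in for the generated Environment type.
type environmentSummary struct {
	ID   string
	Name string
}

// findEnvironmentByName returns the single environment matching name, and
// errors when zero or more than one match so callers never silently pick
// the wrong environment.
func findEnvironmentByName(envs []environmentSummary, name string) (*environmentSummary, error) {
	var matches []*environmentSummary
	for i := range envs {
		if envs[i].Name == name {
			matches = append(matches, &envs[i])
		}
	}
	switch len(matches) {
	case 0:
		return nil, fmt.Errorf("no environment named %q found", name)
	case 1:
		return matches[0], nil
	default:
		return nil, fmt.Errorf("found %d environments named %q; specify a more precise identifier", len(matches), name)
	}
}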


129-137: Revisit handling of partially available environment data.
When certain environment fields are not yet filled (e.g., policyId, description, metadata), the code logs a warning and proceeds. For high-stakes operations, consider implementing a more robust check or retry mechanism to avoid unexpected incomplete data, or provide clear user-facing messaging about potential delays in data availability.


164-173: Reassess non-critical parsing errors.
If the resource filter fails to convert from the API, the code merely logs a warning and continues. This might mask issues if the resource filter is essential to correct operations. Consider converting this scenario to a hard failure (or at least a warning in Terraform output) so that users are immediately aware that the filter is invalid from the API’s perspective.
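
If this is promoted to a user-facing signal, a small helper along these lines could turn the log-only warning into a Terraform diagnostic; this is a sketch of the option, not the provider's current behavior:

package provider

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-framework/diag"
)

// filterParseDiagnostic converts a resource-filter conversion failure into a
// diagnostic that shows up in Terraform output. failHard selects between a
// warning (the current lenient behavior) and an error (abort the read).
func filterParseDiagnostic(err error, failHard bool) diag.Diagnostic {
	summary := "Invalid resource filter returned by API"
	detail := fmt.Sprintf("The environment's resource filter could not be parsed: %s.", err)
	if failHard {
		return diag.NewErrorDiagnostic(summary, detail)
	}
	return diag.NewWarningDiagnostic(summary, detail)
}

The read path would then append the returned diagnostic to resp.Diagnostics in addition to, or instead of, the tflog warning.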

internal/provider/system_resource.go (1)

223-228: Sanitize sensitive error details in logs.
Here, you log the entire error object when encountering BadRequest. This might inadvertently reveal sensitive info. Consider sanitizing or limiting the logged data to protect sensitive user or system details.
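
A sketch of what limiting the logged data could look like; the field names and truncation length are arbitrary choices, and whether err.Error() itself is safe to log still depends on what the API echoes back:

package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-log/tflog"
)

// logAPIFailure records only coarse, bounded details about a failed call
// instead of dumping the whole response or error object into the logs.
func logAPIFailure(ctx context.Context, operation string, statusCode int, err error) {
	fields := map[string]interface{}{
		"operation":   operation,
		"status_code": statusCode,
	}
	if err != nil {
		msg := err.Error()
		if len(msg) > 200 {
			// Truncate so a verbose error cannot leak an entire request body.
			msg = msg[:200] + "...(truncated)"
		}
		fields["error"] = msg
	}
	tflog.Error(ctx, "API request failed", fields)
}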

internal/provider/environment_data_source_test.go (2)

48-57: Complement the complex filter tests with additional scenarios.
These tests thoroughly validate a multi-condition environment filter. For additional confidence, consider adding scenarios with zero conditions, nested filters, or invalid operators to ensure full coverage of filter parsing.


72-80: Add partial-data or intermittent availability tests.
You already verify missing required parameters and nonexistent environments. Including a scenario where the environment is in a partial or in-progress state can strengthen error-handling validation and highlight any race conditions with eventual consistency.

internal/provider/resource_filter.go (1)

89-105: Possible collision concern.
Using only the first 8 hex characters of a SHA-256 hash may create a small chance of collisions over many filters. Typically, this is fine for a short hash, but consider logging or verifying collisions if these hashes are used as unique identifiers in large-scale scenarios.
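
If collisions ever become a practical concern, a small index that remembers the full hash behind each prefix can detect them; hashIndex below is a standalone sketch, not an existing type in the provider:

package provider

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// hashIndex remembers which full hash produced each short ID so that a prefix
// collision is detected instead of silently aliasing two different filters.
type hashIndex struct {
	mu   sync.Mutex
	seen map[string]string // short ID -> full hash
}

func newHashIndex() *hashIndex {
	return &hashIndex{seen: make(map[string]string)}
}

// shortID returns the first 8 hex characters of the SHA-256 of payload and
// errors if a different payload already claimed the same prefix.
func (h *hashIndex) shortID(payload []byte) (string, error) {
	sum := sha256.Sum256(payload)
	full := hex.EncodeToString(sum[:])
	short := full[:8]

	h.mu.Lock()
	defer h.mu.Unlock()
	if prev, ok := h.seen[short]; ok && prev != full {
		return "", fmt.Errorf("short ID collision on %q: two different filters hash to the same prefix", short)
	}
	h.seen[short] = full
	return short, nil
}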

internal/provider/resource_filter_resource.go (4)

6-23: Global Registry Approach

The global registry with a sync.RWMutex is acceptable here, but be aware that relying on an in-memory registry in a Terraform provider can lead to state inconsistencies across multiple Terraform runs or parallel executions. If you need multi-process resilience, consider persisting registry data externally.


53-105: Extended Retry Logic

Increasing retry attempts to 10 with a 1-second delay can be helpful for ephemeral resources, but may also cause unintended slowdowns. Consider implementing an exponential backoff (e.g., doubling the delay each retry) to optimize performance and avoid fixed, potentially lengthy waits.


109-120: State-Only Resource

Declaring this resource as purely in-memory is clear. Just keep in mind that Terraform runs in multiple processes or machines could lose this state unless it’s stored externally.


324-352: Registry-Dependent Import

Relying on an in-memory registry for ImportState is fragile if the registry wasn’t populated by a previous run in the same process. Consider documenting this limitation or persisting the registry to ensure the import can succeed after process restarts.
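
One mitigation would be to mirror the registry to disk so ImportState can survive a process restart; the sketch below is purely illustrative (file layout, type names, and locking strategy are all assumptions, not the provider's current design):

package provider

import (
	"encoding/json"
	"os"
	"sync"
)

// persistentRegistry mirrors the in-memory filter registry to a JSON file so
// that ImportState can still resolve IDs after a process restart.
type persistentRegistry struct {
	mu      sync.RWMutex
	path    string
	entries map[string]json.RawMessage // filter ID -> serialized filter
}

func loadRegistry(path string) (*persistentRegistry, error) {
	r := &persistentRegistry{path: path, entries: map[string]json.RawMessage{}}
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return r, nil // first run: start empty
	}
	if err != nil {
		return nil, err
	}
	if err := json.Unmarshal(data, &r.entries); err != nil {
		return nil, err
	}
	return r, nil
}

func (r *persistentRegistry) put(id string, filter json.RawMessage) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.entries[id] = filter
	data, err := json.MarshalIndent(r.entries, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(r.path, data, 0o600)
}

func (r *persistentRegistry) get(id string) (json.RawMessage, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	f, ok := r.entries[id]
	return f, ok
}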

examples/resources/ctrlplane_environment/main.tf (4)

191-223: Resource, Output, and Data: metadata_contains

Include a comment or note that metadata_contains might also match partial words—for users used to exact matches. Otherwise, it’s a good example of partial substring matching.


334-365: Resource, Output, and Data: provider_equals

Filters by provider with a contains operator referencing “gcp.” Possibly rename the resource to “provider_contains” if the operator is truly contains.

-resource "ctrlplane_environment" "provider_equals" {
+resource "ctrlplane_environment" "provider_contains" {

536-567: Resource, Output, and Data: created_at_before

Demonstrates time-based filtering. Hard-coded dates may be easy for examples, but consider parameterizing for real-world usage.


602-633: Resource, Output, and Data: created_at_before_or_on

Continues the time-based filter pattern. Ensure documentation clarifies inclusive vs. exclusive date boundaries.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ce7842d and fda5dc4.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (43)
  • .env.example (1 hunks)
  • .github/workflows/test.yml (4 hunks)
  • .gitignore (1 hunks)
  • .golangci.yml (1 hunks)
  • GNUmakefile (1 hunks)
  • README.md (2 hunks)
  • client/client.gen.go (14 hunks)
  • client/client.go (1 hunks)
  • client/openapi.generate.sh (0 hunks)
  • client/openapi.v1.yaml (0 hunks)
  • docs/data-sources/environment.md (1 hunks)
  • docs/index.md (1 hunks)
  • docs/resources/environment.md (1 hunks)
  • docs/resources/resource_filter.md (1 hunks)
  • docs/resources/system.md (1 hunks)
  • examples/resources/ctrlplane_environment/main.tf (1 hunks)
  • examples/resources/ctrlplane_resource_filter/main.tf (1 hunks)
  • go.mod (1 hunks)
  • internal/integration/environment_test.go (1 hunks)
  • internal/integration/integration_suite_test.go (1 hunks)
  • internal/integration/test_helpers_test.go (1 hunks)
  • internal/provider/data_source_model.go (1 hunks)
  • internal/provider/environment_data_source.go (1 hunks)
  • internal/provider/environment_data_source_test.go (1 hunks)
  • internal/provider/environment_model.go (1 hunks)
  • internal/provider/environment_resource.go (1 hunks)
  • internal/provider/environment_resource_test.go (1 hunks)
  • internal/provider/environment_schema.go (1 hunks)
  • internal/provider/provider.go (11 hunks)
  • internal/provider/provider_test.go (2 hunks)
  • internal/provider/resource_filter.go (1 hunks)
  • internal/provider/resource_filter_resource.go (1 hunks)
  • internal/provider/resource_filter_resource_test.go (1 hunks)
  • internal/provider/resource_filter_schema.go (1 hunks)
  • internal/provider/retry.go (1 hunks)
  • internal/provider/system_resource.go (6 hunks)
  • internal/provider/system_resource_test.go (1 hunks)
  • internal/provider/utils.go (1 hunks)
  • internal/provider/utils_test.go (1 hunks)
  • terraform-provider-scaffolding-framework (0 hunks)
  • terraform-registry-manifest.json (1 hunks)
  • testing/acctest/provider.go (1 hunks)
  • testing/testutils/config.go (1 hunks)
💤 Files with no reviewable changes (3)
  • terraform-provider-scaffolding-framework
  • client/openapi.generate.sh
  • client/openapi.v1.yaml
🧰 Additional context used
🪛 LanguageTool
docs/index.md

[style] ~42-~42: To form a complete sentence, be sure to include a subject.
Context: ... The URL of the Ctrlplane API endpoint. Can be set in the CTRLPLANE_BASE_URL enviro...

(MISSING_IT_THERE)

🔇 Additional comments (178)
terraform-registry-manifest.json (1)

2-5: Consistent JSON formatting applied

The changes to this file are purely formatting-related, with consistent indentation applied to the JSON structure. There are no changes to the actual content or values in the manifest.

.golangci.yml (1)

20-20: New linter added: usetesting

Good addition of the usetesting linter, which ensures proper usage of the testing package in Go code. This is appropriate for a project with test files and will help maintain high-quality test implementations.

Note: The linters exportloopref and tenv appear to have been removed from the configuration based on the AI summary, but you previously acknowledged this was due to these linters being deprecated.

.github/workflows/test.yml (3)

9-9: String formatting standardization

The quotes around "README.md" have been changed from single quotes to double quotes to maintain consistent string formatting throughout the file.

Also applies to: 12-12


28-28: String formatting standardization for go-version-file

The quotes around "go.mod" have been changed from single quotes to double quotes, maintaining consistent string formatting throughout the file.

Also applies to: 43-43, 72-72


66-67: Updated Terraform version testing matrix

The Terraform versions tested in the CI workflow have been updated to focus on more recent versions (1.10.* and 1.11.*), removing older versions. This ensures the provider is tested against the versions most likely to be used by customers.

.env.example (1)

1-14: Well-structured environment variable example file

The .env.example file provides clear documentation of the required environment variables for running acceptance tests. The variables are well-organized by category (required, test configuration, Go test configuration) with helpful comments explaining their purpose.

internal/provider/provider_test.go (4)

13-13: Import for acctest package added

Good addition of the import for the acctest package, which supports the enhanced pre-check functionality.


25-28: Improved debug logging configuration

The code now sets the Terraform debug logging environment variable if it's not already set, which improves the diagnostic capabilities during test runs.


30-47: Enhanced environment variable validation

The implementation now uses a more structured approach to check required environment variables, with proper logging of their status. This makes the test setup more robust and easier to debug when issues occur.

The default setting for CTRLPLANE_BASE_URL is a good practice that maintains backward compatibility while enforcing the requirement.


49-50: Centralized pre-check handling

Good addition of the call to the common acctest.PreCheck(t) function, which likely contains shared validation logic. This promotes code reuse and consistency across tests.

internal/provider/provider.go (5)

76-79: Debug logging added for API key inclusion

Good addition of debug logging that shows when an API key is being added to the request without exposing the actual key. This improves traceability during troubleshooting.


85-87: Comprehensive debug logging added throughout the provider

Great addition of detailed debug logging throughout the provider. This will significantly help with troubleshooting issues during provider operations.

Also applies to: 94-97, 111-113, 120-123, 195-199, 218-220


165-166:

✅ Verification successful

Base URL updated to use ctrlplane.dev domain

The default base URL has been updated from app.ctrlplane.com to app.ctrlplane.dev, which aligns with the .env.example file. Ensure this change is consistent across all documentation and examples.


🏁 Script executed:

#!/bin/bash
# Check for other occurrences of the old domain in the codebase
echo "Checking for occurrences of old domain (app.ctrlplane.com):"
rg "app.ctrlplane.com" --type go --type markdown --type yaml

echo "Checking for occurrences of new domain (app.ctrlplane.dev):"
rg "app.ctrlplane.dev" --type go --type markdown --type yaml

Length of output: 1434


Confirmation: Base URL Update is Consistent

The change to use "https://app.ctrlplane.dev" as the default base URL has been verified across the codebase. All instances—including within internal/provider/provider.go, various test files, and the documentation (docs/index.md)—reflect this update, and no references to the old "app.ctrlplane.com" domain were found.

  • Verified Files:
    • internal/provider/provider.go (description and assignment)
    • Test files (internal/provider/system_resource_test.go, internal/provider/provider_test.go)
    • Documentation (docs/index.md)

231-237:

✅ Verification successful

New resources added to the provider

Two new resources have been added to the provider: NewEnvironmentResource and NewResourceFilterResource.


🏁 Script executed:

#!/bin/bash
# Check for the implementation of these new resources
echo "Searching for Environment resource implementation:"
rg -A 5 "func NewEnvironmentResource" --type go

echo "Searching for ResourceFilter resource implementation:"
rg -A 5 "func NewResourceFilterResource" --type go

Length of output: 1224


Verified: New Resource Implementation and Provider Updates
The new resources (NewEnvironmentResource and NewResourceFilterResource) have been confirmed. Their implementations exist in:

  • internal/provider/environment_resource.go
  • internal/provider/resource_filter_resource.go

The provider’s resource registration in internal/provider/provider.go correctly includes these resources. There are no further changes or issues detected in the snippet.


239-243:

✅ Verification successful

Data source addition for Environment

A new data source NewEnvironmentDataSource has been added.


🏁 Script executed:

#!/bin/bash
# Check for the implementation of the new data source
echo "Searching for Environment data source implementation:"
rg -A 5 "func NewEnvironmentDataSource" --type go

Length of output: 605


Data Source Implementation Verified
The new data source NewEnvironmentDataSource is confirmed to be correctly implemented in internal/provider/environment_data_source.go as expected from the registration in internal/provider/provider.go. No further modifications are needed.

  • Verified implementation in internal/provider/environment_data_source.go
  • Registration in internal/provider/provider.go aligns with the intended changes
client/client.gen.go (21)

27-37: Looks good: Enums for JobStatus.
These new constants for JobStatus are clearly named and consistent. No issues noted.


39-43: Looks good: Enums for JobWithTrigger approval statuses.
These new constants precisely capture approval states with clear naming.


92-98: New fields in Environment struct appear consistent.
The fields (CreatedAt, Description, Id, Metadata, Name, Policy, PolicyId) align well with the environment concept. No immediate concerns.


125-157: New JobWithTrigger struct appears well-structured.
It neatly encapsulates approval details and references to environment, deployment, etc. One caution: maps like Variables and JobAgentConfig are not concurrency-safe if shared across goroutines. Otherwise, this design is clear.


159-160: Defines JobWithTriggerApprovalStatus correctly.
This is consistent with the rest of the enumerated types.


298-326: CreateDeploymentJSONBody struct looks properly defined.
All fields seem relevant to deployment creation: name, slug, system ID, etc. No issues found.


484-486: CreateDeploymentJSONRequestBody alias is fine.
This mirrors the pattern seen elsewhere (aliasing the JSON body struct).


687-697: Interface methods for deployment creation and retrieval look consistent.
They match the naming convention and align with the newly added request/response logic.


818-828: CreateDeploymentWithBody method implementation is straightforward.
Reuses NewCreateDeploymentRequestWithBody, applies request editors, and sends. No issues.


830-840: CreateDeployment method follows the same pattern.
Uses NewCreateDeploymentRequest, sets context, and sends. Looks correct.


842-852: DeleteDeployment method logic is consistent.
No concerns; it follows the same request creation/editing pattern.


854-864: GetDeployment method logic is similarly consistent.
Everything appears to follow the established approach.


3173-3174: GetJobResponse updated to return JobWithTrigger.
This correctly matches the new struct. No issues.


3862-3870: CreateDeploymentWithBodyWithResponse parse wrapper added.
This wrapper function cleanly delegates to ParseCreateDeploymentResponse. Looks fine.


3871-3877: CreateDeploymentWithResponse wrapper is consistent.
It likewise delegates to ParseCreateDeploymentResponse. No concerns.


3879-3886: DeleteDeploymentWithResponse wrapper added.
Again, matches the standard pattern for parse delegation.


3888-3895: GetDeploymentWithResponse wrapper is consistent.
No issues found in how it handles ParseGetDeploymentResponse.


4280-4324: ParseCreateDeploymentResponse function.
Typical openapi-codegen logic. Uses strings.Contains to detect JSON content type—common in generated code, albeit HasPrefix might be slightly safer for potential charsets. No functional issue here.


4325-4367: ParseDeleteDeploymentResponse function.
Implementation mirrors the same pattern for response checks and JSON unmarshalling. No notable issues.


4369-4403: ParseGetDeploymentResponse function.
Matches the same generated pattern: checks status code, unmarshals JSON. No issues.


4699-4705: Case handling for 200 status with JobWithTrigger.
The addition of this branch and JobWithTrigger unmarshalling is consistent with the new job model. Looks correct.

go.mod (1)

6-16: Bulk dependency update looks good

The update of multiple dependencies to newer versions is good practice for security and feature improvements. Key framework updates include:

  • terraform-plugin-framework: v1.12.0 → v1.14.1
  • terraform-plugin-go: v0.25.0 → v0.26.0
  • Adding terraform-plugin-log v0.9.0
docs/index.md (1)

42-42: Domain update consistent with environment changes

The default value for base_url has been updated from https://app.ctrlplane.com to https://app.ctrlplane.dev, which aligns with the environment variables added in the README and likely represents a change in the service's domain.

🧰 Tools
🪛 LanguageTool

[style] ~42-~42: To form a complete sentence, be sure to include a subject.
Context: ... The URL of the Ctrlplane API endpoint. Can be set in the CTRLPLANE_BASE_URL enviro...

(MISSING_IT_THERE)

README.md (1)

62-68: Good addition of environment setup instructions

The addition of instructions for creating a .env file with the necessary environment variables for testing is helpful for contributors. This improves the developer experience and ensures consistent test setup.

docs/resources/system.md (1)

33-33: Improved user experience with auto-generated slugs

Moving the slug field from "Required" to "Optional" and updating its description to indicate automatic generation from the name is a good UX improvement. This allows users to omit the slug and have it auto-generated, reducing redundancy and potential errors.

internal/provider/retry.go (1)

13-18: Well-structured RetryConfig with clear parameters

The RetryConfig struct provides a clean interface for configuring retry behavior with all necessary parameters: maximum retries, total wait time, and delay between retries.

.gitignore (2)

37-39: Good security practice excluding environment and temporary files

Adding .env files to gitignore is important for security as these typically contain sensitive information like API keys. Excluding .out files and .cursor files also helps keep the repository clean.


41-45: Appropriate Terraform-specific exclusions

The additions for Terraform files are appropriate and follow best practices. Terraform state files often contain sensitive information and should not be committed to version control.

docs/data-sources/environment.md (4)

1-12: Well-structured documentation header

The documentation header provides clear identification of the data source with appropriate metadata and description.


18-22: Clear documentation of required attributes

The required attributes section is well-documented, clearly specifying that users must provide both the name and system_id to use this data source.


23-30: Comprehensive read-only attributes documentation

The read-only attributes section provides a thorough list of all properties that will be populated when the data source is read, giving users a clear understanding of what information will be available.


32-43: Well-structured nested schema documentation

The nested schema for resource_filter is clearly documented, showing all available attributes and their purposes.

internal/provider/data_source_model.go (3)

13-20: Enhanced DataSourceModel with additional fields

The DataSourceModel struct has been augmented with additional fields that will enable more comprehensive data source implementations. The new fields (ID, Name, ResourceFilter, and CustomAttributes) align with the documented attributes for the environment data source.


22-25: Well-defined CustomAttribute struct

The CustomAttribute struct is clearly defined with name-value fields, following Terraform's attribute pattern.


14-20:

❓ Verification inconclusive

Verify CustomAttributes usage in environment data source

The CustomAttributes field is added to the DataSourceModel, but it's not documented in the environment data source schema. Ensure this field is either used correctly or documented appropriately.


🏁 Script executed:

#!/bin/bash
# Check if CustomAttributes is used in environment data source implementation
grep -r "CustomAttributes" --include="*.go" internal/provider/environment_*.go

# Check if CustomAttributes is documented in any other data sources
grep -r "custom_attributes" --include="*.md" docs/data-sources/

Length of output: 370


Verify CustomAttributes usage across environment data sources

The recent script run didn’t return output confirming any usage or documentation of the CustomAttributes field in the environment data source code or the related docs. Please manually verify that this field is intentionally included in the DataSourceModel. If it’s required by your design, ensure its usage is correctly implemented and add corresponding documentation; otherwise, consider removing it to avoid potential confusion.

internal/provider/environment_model.go (3)

10-20: Well-structured data model definition for environment resource

The EnvironmentModel struct provides a comprehensive representation of an environment with appropriate Terraform framework field tags. The inclusion of both direct resource filter and reference via ID is a good pattern that allows for flexibility in resource configuration.


22-31: Clean separation between resource and data source models

Good practice to maintain separate model structs for resource and data source, even with similar fields. This separation allows for independent evolution of the models as requirements change.


33-43: Well-implemented default initialization

The SetDefaults method properly handles null/nil cases to prevent runtime errors, particularly important for the Terraform Plugin Framework which uses nullable types. The initialization of empty collections rather than nil is a good defensive coding practice.

docs/resources/resource_filter.md (1)

1-77: Comprehensive documentation for resource filter with clear schema definition

The documentation is well-structured and follows Terraform documentation standards. It clearly defines all required, optional, and read-only attributes, along with detailed descriptions of nested schemas. The hierarchical organization of nested conditions is particularly well done.

internal/provider/utils_test.go (3)

13-79: Thorough test cases for the Slugify function

The test cases cover a wide range of scenarios including simple strings, special characters, spaces, and edge cases. This comprehensive approach ensures the robustness of the slugification process across various inputs.


81-122: Good boundary testing for slug length validation

The test properly verifies both valid and invalid slug lengths, including edge cases like empty strings and the maximum allowed length. The test structure with clear case names and expected error flags makes the test intent obvious.


124-135: Important null value test for validator

Testing the validator's behavior with null values is essential, as this is a common case in Terraform where optional attributes might not be set. Good defensive testing approach.

docs/resources/environment.md (1)

1-94: Well-documented environment resource with clear constraints

The documentation comprehensively covers all aspects of the environment resource, with clear separation between required, optional, and read-only attributes. The constraint between resource_filter and resource_filter_id is explicitly mentioned, which is important for users to understand.

The nested schema documentation for resource filters is detailed and consistent with the standalone resource filter documentation, which maintains a cohesive user experience.

testing/testutils/config.go (4)

19-22: Provider factory setup looks good

The testAccProtoV6ProviderFactories variable is correctly set up to instantiate the provider during acceptance testing.


24-31: Well-structured test case wrapper

Good approach to centralize the provider configuration and pre-check function. This will help maintain consistency across tests.


44-58: CheckResourceExists function implementation is correct

The function properly verifies both the existence of the resource and that it has a valid ID.


60-67: ConfigCompose implementation is simple and effective

The function provides a clean way to combine multiple Terraform configuration blocks.

examples/resources/ctrlplane_resource_filter/main.tf (2)

17-33: Resource filter definition looks good

The resource filter is well structured with clear conditions for filtering StatefulSet resources that contain "postgres" in their name.


71-103: Inline filter example is well constructed

The inline filter example demonstrates an alternative approach with an "or" operator and multiple conditions, providing a good contrast to the referenced filter approach.

internal/provider/environment_schema.go (3)

45-51: Good use of plan modifiers for resource_filter_id

The stringplanmodifier.RequiresReplace() plan modifier correctly indicates that changes to this attribute require resource replacement, which is important for maintaining proper resource lifecycle.


52-60: Well-structured inline resource filter attribute

The resource_filter attribute is properly defined as a single nested attribute with appropriate plan modifiers. The comment explaining that changes to an inline filter force recreation is helpful for understanding the behavior.


70-140: Data source schema implementation is comprehensive

The data source schema provides all necessary attributes with appropriate descriptions and computed flags. The nested structure for resource filters is particularly well-defined.

GNUmakefile (3)

15-17: Good addition of client code generation target

Adding a specific target for client code generation helps maintain clarity in the build process.


28-42: Improved acceptance test configuration with environment variable handling

The enhanced testacc target correctly checks for the presence of a .env file and sources variables from it, providing better error messages when configuration is missing.


44-62: Useful quiet test mode for acceptance tests

The testacc-quiet target provides a less verbose option for running tests, which is helpful for local testing.

internal/integration/environment_test.go (1)

1050-1055: Helper function aligns with previous feedback on system list and cleanup.

Using safeDeleteTestSystem to remove test systems after each scenario appears to address the need for top-level CRUD operations on systems mentioned in earlier feedback. This approach ensures consistent cleanup and prevents stale resources from persisting. No concerns here.

internal/provider/environment_resource_test.go (1)

17-139: Extensive coverage for environment resource behavior.

This suite thoroughly validates environment configurations under varied scenarios (simple and complex filters, name/kind filtering, etc.), ensuring robust test coverage. Including disabled tests for unsupported operations is a good way to document future enhancements.

internal/integration/integration_suite_test.go (1)

79-98: Autocleanup mechanism is well-implemented.

Automatically cleaning up resources from prior test runs when TestModeAutoCleanup is active is an effective strategy to keep the test environment consistent without manual intervention.

internal/integration/test_helpers_test.go (2)

19-22: Ensure API key is never logged.
It’s good practice to confirm that the API key is not inadvertently exposed in logs. Although this function itself doesn’t log the key, be mindful of any future logging statements referencing request headers.


138-150:

✅ Verification successful

Review flexibility of API host URL.
Appending “/api” may not be desired in every scenario, especially if the user or integration tests rely on a different base path. Ensure this convention is aligned with the rest of the system.

Would you like to confirm that all tests and references assume “/api” as the correct path? If so, run:


🏁 Script executed:

#!/bin/bash
rg -A 3 '/api' 

Length of output: 2318


API Host URL Suffix Usage Verified

After reviewing the grep results, it’s clear that the system consistently enforces the “/api” suffix across multiple components (e.g., in both internal/integration/test_helpers_test.go and internal/provider/provider.go). This confirms that the tests and provider configuration rely on “/api” as the base path.

  • The function in internal/integration/test_helpers_test.go appends “/api” if it isn’t present.
  • A similar pattern is observed in internal/provider/provider.go.
  • Comments in client/client.gen.go further indicate that a relative path is expected to be appended to the given server URL.

If this behavior is intentional, the current approach is acceptable. However, if flexibility for alternative base paths is desired, consider parameterizing the API host suffix.

internal/provider/environment_data_source.go (1)

63-66: Consider allowing alternative identification strategies.
Currently, both system_id and name are strictly required. However, there may be use cases where a user wants to reference the environment by a unique environment ID alone or another identifier. Consider offering an alternative field or a fallback logic, enabling more flexible identification strategies.
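
A hedged sketch of the fallback idea follows; the Environment type and both lookup helpers are hypothetical stand-ins, not part of the current client:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Environment is a stand-in type for this sketch only.
type Environment struct {
	ID       string
	SystemID string
	Name     string
}

// Hypothetical lookup helpers; the real data source currently supports only
// the system_id + name pair.
func getEnvironmentByID(ctx context.Context, id string) (*Environment, error) {
	return &Environment{ID: id}, nil
}

func getEnvironmentBySystemAndName(ctx context.Context, systemID, name string) (*Environment, error) {
	return &Environment{SystemID: systemID, Name: name}, nil
}

// resolveEnvironment sketches a fallback lookup order: prefer a direct
// environment ID when supplied, otherwise fall back to system_id + name.
func resolveEnvironment(ctx context.Context, id, systemID, name string) (*Environment, error) {
	if id != "" {
		return getEnvironmentByID(ctx, id)
	}
	if systemID == "" || name == "" {
		return nil, errors.New("either id, or both system_id and name, must be set")
	}
	return getEnvironmentBySystemAndName(ctx, systemID, name)
}

func main() {
	env, _ := resolveEnvironment(context.Background(), "", "sys-123", "production")
	fmt.Println(env.SystemID, env.Name)
}
```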

internal/provider/system_resource.go (1)

327-348: Check concurrency edge cases for slug auto-generation.
Slug auto-generation triggers if the field is removed from the configuration. In high-throughput scenarios, two or more processes could attempt updates, potentially leading to collisions or inconsistent states. Consider adding uniqueness checks or more robust concurrency safeguards to avoid conflicts.
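
One possible mitigation, sketched below under the assumption of a 30-character slug limit, is to append a short random suffix to auto-generated slugs; this is not the provider's current behavior:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// uniqueSlug derives a slug from the name and appends a short random suffix
// to reduce the chance of collisions under concurrent auto-generation.
func uniqueSlug(name string) (string, error) {
	base := strings.ToLower(strings.ReplaceAll(strings.TrimSpace(name), " ", "-"))
	if len(base) > 23 { // leave room for "-" plus 6 hex chars within the 30-char limit
		base = base[:23]
	}

	suffix := make([]byte, 3) // 6 hex characters
	if _, err := rand.Read(suffix); err != nil {
		return "", err
	}
	return base + "-" + hex.EncodeToString(suffix), nil
}

func main() {
	s, _ := uniqueSlug("My Production System")
	fmt.Println(s)
}
```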

internal/provider/environment_data_source_test.go (1)

16-35: Tests provide solid coverage of basic environment data source usage.
The steps effectively validate attribute correctness in normal scenarios, demonstrating a well-structured test design.

internal/provider/resource_filter_resource_test.go (4)

1-14: No issues found.


15-94: Extensive coverage for resource lifecycle tests.
These steps provide a thorough sequence of create, import, update, and read checks across multiple scenarios (simple filter, name filter, complex conditions, operator changes, etc.). Great job ensuring that each attribute is validated and that import works correctly.


96-113: Proper error handling tests.
Validating error messages for missing and invalid fields demonstrates robust coverage. This ensures that the provider fails gracefully when encountering invalid configurations.


115-202: Config generation helpers look good.
These helper functions systematically generate different filter configurations for test cases, improving readability and maintainability. Consider grouping similar logic if the set of configurations grows larger in the future, but overall this is perfectly acceptable.

internal/provider/resource_filter.go (21)

1-15: No issues found.


16-71: Well-structured schema builder.
The function BuildResourceFilterNestedAttributes elegantly handles both leaf-level filters (when depth=0) and nested filters (depth>0). This is a neat approach for composable, recursive schema definitions.
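
For readers unfamiliar with the pattern, a simplified sketch of a depth-bounded recursive schema builder (attribute set abbreviated; not the exact implementation) might look like this:

```go
package main

import (
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
)

// buildFilterAttributes returns only leaf attributes at depth 0 and nests a
// "conditions" list of the same shape at greater depths.
func buildFilterAttributes(depth int) map[string]schema.Attribute {
	attrs := map[string]schema.Attribute{
		"type":     schema.StringAttribute{Required: true, Description: "Filter type, e.g. metadata, kind, comparison."},
		"operator": schema.StringAttribute{Optional: true, Description: "Comparison operator."},
		"key":      schema.StringAttribute{Optional: true, Description: "Metadata key to match."},
		"value":    schema.StringAttribute{Optional: true, Description: "Value to compare against."},
	}
	if depth > 0 {
		attrs["conditions"] = schema.ListNestedAttribute{
			Optional: true,
			NestedObject: schema.NestedAttributeObject{
				Attributes: buildFilterAttributes(depth - 1), // recurse one level down
			},
			Description: "Nested conditions for comparison filters.",
		}
	}
	return attrs
}

func main() {
	_ = buildFilterAttributes(2) // e.g. two levels of nesting
}
```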


72-72: No issues found.


73-81: ResourceFilterModel design is clear and flexible.
The typed fields and tfsdk struct tags effectively capture filter data.


82-82: No issues found.


106-106: No issues found.


107-120: Recursive initialization of conditions is correct.
Ensures that subconditions are also properly initialized. This approach reduces null-slice errors in deeper nested filters.


121-121: No issues found.


122-152: Robust filtering logic in ToAPIFilter.
Graceful handling of optional fields like operator, key, value, and not is commendable. Logging statements help debug filter transformations.
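
As a rough illustration of the conversion described here (using a simplified stand-in model, not the actual ToAPIFilter code), optional fields are emitted only when set:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// filterModel is a simplified stand-in for the provider's filter model.
type filterModel struct {
	Type     string
	Key      string
	Operator string
	Value    string
	Not      bool
}

// toAPIFilter always emits the type and includes optional fields only when
// they carry a value.
func toAPIFilter(m filterModel) map[string]any {
	out := map[string]any{"type": m.Type}
	if m.Key != "" {
		out["key"] = m.Key
	}
	if m.Operator != "" {
		out["operator"] = m.Operator
	}
	if m.Value != "" {
		out["value"] = m.Value
	}
	if m.Not {
		out["not"] = true
	}
	return out
}

func main() {
	b, _ := json.Marshal(toAPIFilter(filterModel{Type: "metadata", Key: "env", Operator: "equals", Value: "prod"}))
	fmt.Println(string(b))
}
```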


153-153: No issues found.


154-164: Good default assignment.
ValidateAPIFilter ensures a valid "type" key is always present. Useful fallback behavior.


165-165: No issues found.


166-197: Recursive conversion is neatly implemented.
convertConditionsToAPIFilter effectively processes nested conditions and gracefully handles empty values for comparison filters. Logging further aids in diagnosing issues.


198-198: No issues found.


199-285: Comprehensive parsing in FromAPIFilter.
The function carefully handles both top-level and nested conditions, populating each field with robust error handling. This is quite thorough.


286-286: No issues found.


287-293: Logical approach for defaulting empty value with comparison filters.
Helps maintain consistent behavior across nested conditions.


294-294: No issues found.


295-306: Maintains consistent nested structure.
PostProcessResourceFilter ensures that subconditions are preserved in a final consistent state. Straightforward and effective.


307-307: No issues found.


308-321: DeepFixResourceFilter approach.
The function ensures that comparison filters with subconditions always have an empty string value. This aligns with the API contract, preventing accidental data mismatch.

internal/provider/system_resource_test.go (41)

1-25: No issues found.


26-42: Flexible resource config function.
testAccSystemResourceConfig facilitates dynamic generation of HCL with optional description, making the tests more concise.
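
The pattern looks roughly like the sketch below; the resource type and field names are drawn from this PR's examples but should be treated as illustrative:

```go
package main

import "fmt"

// testConfigSketch builds HCL with fmt.Sprintf and only emits the description
// attribute when a value is supplied.
func testConfigSketch(name, slug string, description *string) string {
	desc := ""
	if description != nil {
		desc = fmt.Sprintf("  description = %q\n", *description)
	}
	return fmt.Sprintf(`
resource "ctrlplane_system" "test" {
  name = %q
  slug = %q
%s}
`, name, slug, desc)
}

func main() {
	d := "example system"
	fmt.Println(testConfigSketch("Test System", "test-system", &d))
	fmt.Println(testConfigSketch("Test System", "test-system", nil))
}
```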


43-43: No issues found.


44-51: Valid approach for missing required fields scenario.
testAccSystemResourceConfigMissingRequired demonstrates a negative test to verify error handling when “name” is omitted.


52-52: No issues found.


53-62: Auto-generation test of slug.
Ensuring the provider gracefully handles omitted slug is an important acceptance test scenario.


63-64: No issues found.


65-67: Simple pointer helper.
stringPtr is a handy utility for optional fields in the test config.
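
For completeness, the helper amounts to a one-liner; a minimal sketch:

```go
package main

import "fmt"

// stringPtr returns a pointer to s, handy for optional fields in test configs.
func stringPtr(s string) *string { return &s }

func main() {
	desc := stringPtr("optional description")
	fmt.Println(*desc)
}
```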


68-68: No issues found.


69-81: Duplicate slug scenario.
testAccSystemResourceConfigDuplicateSlug clarifies expected failures when two resources share the same slug.


82-82: No issues found.


83-91: Invalid slug test.
Helps confirm validation logic catches whitespace (and presumably other invalid) slugs.


92-92: No issues found.


93-134: Custom destroy verification.
testAccCheckSystemDestroy ensures that the resource is truly removed from the external system. This direct client usage is a good pattern for acceptance tests.
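
The general shape of such a CheckDestroy function is sketched below; systemExists is a hypothetical stand-in for the real client call:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-testing/terraform"
)

// systemExists is a hypothetical stand-in for an API call that reports
// whether a system still exists server-side.
func systemExists(id string) (bool, error) {
	return false, nil
}

// checkSystemDestroySketch walks the final state and fails if any
// ctrlplane_system resource still exists in the external system.
func checkSystemDestroySketch(s *terraform.State) error {
	for _, rs := range s.RootModule().Resources {
		if rs.Type != "ctrlplane_system" {
			continue
		}
		exists, err := systemExists(rs.Primary.ID)
		if err != nil {
			return fmt.Errorf("checking system %s: %w", rs.Primary.ID, err)
		}
		if exists {
			return fmt.Errorf("system %s still exists after destroy", rs.Primary.ID)
		}
	}
	return nil
}

func main() {} // in practice this is wired into resource.TestCase{CheckDestroy: ...}
```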


135-135: No issues found.


136-225: Broad coverage in TestAccSystemResource.
Tests creation, import, updates (name, slug, description), and final read validations. The approach is well structured for acceptance testing with multiple resource steps.


226-226: No issues found.


227-271: Error handling coverage.
TestAccSystemResourceErrorHandling demonstrates multiple negative test scenarios (missing fields, invalid slug, duplicates). Good to see thorough coverage of potential user misconfigurations.


272-272: No issues found.


273-287: Explicit invalid slug check.
Reinforces slug validation with an additional specialized scenario. A clear error expectation is crucial.


288-288: No issues found.


289-315: Robust import error tests.
Covers invalid IDs and scenario with a valid UUID that references a non-existent resource. Solid approach for import coverage.


316-316: No issues found.


317-394: Unit-level check for setSystemResourceData.
Ensures consistent data population with or without descriptions. This granular test is beneficial for diagnosing edge cases in the resource data models.


395-406: Verifying distinct slugs with same name.
testAccSystemResourceConfigSameNameDifferentSlug ensures that multiple resources with identical names still function as expected.


407-407: No issues found.


408-437: Same name, different slug acceptance test.
Well-designed scenario to confirm that the provider can handle multiple resources referencing the same name field while enforcing unique slugs.


438-438: No issues found.


439-458: Test for auto-generated slug from complex name.
Validates slugification logic for multi-word names. Nicely covers typical user input patterns.


459-459: No issues found.


460-479: Complex name slugification test.
Ensures spaces become hyphens. A strong validation for user-friendly name patterns.
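
A minimal sketch of the slugification behavior being validated (the provider's real implementation may handle more edge cases such as punctuation or length enforcement):

```go
package main

import (
	"fmt"
	"strings"
)

// slugify lowercases the name and collapses whitespace runs into single hyphens.
func slugify(name string) string {
	slug := strings.ToLower(strings.TrimSpace(name))
	return strings.Join(strings.Fields(slug), "-")
}

func main() {
	fmt.Println(slugify("My Complex System Name")) // my-complex-system-name
}
```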


480-480: No issues found.


481-496: Slug length constraint.
TestAccSystemResourceLongSlug verifies that the provider rejects automatically generated slugs exceeding the 30-character limit, preventing unexpected resource creation.


497-497: No issues found.


498-512: Explicitly testing overlong slug.
TestAccSystemResourceExplicitLongSlug further confirms the slug length validation for user-supplied values.


513-513: No issues found.


514-520: Config generator for overlong slug.
A concise function that sets up long slug scenarios.


521-521: No issues found.


522-556: Checking plan for overlong slug update.
TestAccSystemResourceUpdateWithLongSlug ensures the plan fails gracefully while preserving the original valid resource. This is a thoughtful approach to acceptance testing.


557-557: No issues found.


558-592: Resource replacement on slug change.
TestAccSystemResourceRecreateOnSlugChange effectively demonstrates that altering the slug triggers a new resource, matching expected lifecycle behavior for ephemeral IDs.

internal/provider/resource_filter_resource.go (11)

1-3: License Header Looks Good

No issues found in the new license header.


25-36: Idempotent Registry Initialization

The idempotent design of InitFilterRegistry is clear and safe. Good use of the mutex to prevent race conditions.
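
A minimal sketch of the idempotent, mutex-guarded initialization pattern (structure simplified relative to the real code):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	registryMu sync.Mutex
	registry   map[string]map[string]any
)

// initFilterRegistrySketch allocates the registry exactly once; repeated
// calls are safe because the check runs under the mutex.
func initFilterRegistrySketch() {
	registryMu.Lock()
	defer registryMu.Unlock()
	if registry == nil {
		registry = make(map[string]map[string]any)
	}
}

func main() {
	initFilterRegistrySketch()
	initFilterRegistrySketch() // safe to call repeatedly
	fmt.Println(len(registry))
}
```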


38-51: Deferred Lock Release

The code properly defers lock release and ensures the registry is initialized if needed. This is a solid concurrency handling pattern.


121-130: Resource State Model

The ResourceFilterResourceModel fields are well-defined and reflect the filter structure accurately.


132-134: Resource Type Name

Metadata sets the resource name to <ProviderTypeName>_resource_filter—a good, descriptive naming pattern that follows Terraform conventions.


136-173: Schema Completeness

The schema comprehensively covers all attributes needed for resource filtering. The usage of nested attributes for conditions is neatly organized.


175-177: Empty Configure

Leaving Configure empty is acceptable if external configuration isn’t needed.


179-239: Validations in Create

The approach to validating required attributes based on the filter type is logical. Error messages are clear. Just ensure that any future filter types also introduce the necessary validation blocks.


241-270: Re-Registration in Read

Re-registering the existing resource filter during Read ensures consistency in the in-memory registry. This is unusual for Terraform but suits a state-only resource approach.


272-304: ID Recalculation in Update

Hashing the filter again in Update may overwrite the ID if attributes are changed, effectively recreating a new registry entry. Ensure that is the intended behavior for your use case.

Can you confirm if generating a new ID on every change is the desired outcome? If not, we might need to use the existing ID and only re-hash for new resources.
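
For context, a content-derived ID of this kind can be sketched as hashing the canonical JSON of the filter; this illustrates the idea only and is not the provider's actual hashing code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// filterID marshals the filter and hashes it, so any attribute change yields
// a new ID and therefore a new registry entry.
func filterID(filter map[string]any) (string, error) {
	b, err := json.Marshal(filter) // map keys are sorted, so output is deterministic
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])[:16], nil // shortened for readability
}

func main() {
	id1, _ := filterID(map[string]any{"type": "kind", "operator": "equals", "value": "Deployment"})
	id2, _ := filterID(map[string]any{"type": "kind", "operator": "equals", "value": "StatefulSet"})
	fmt.Println(id1 != id2) // true: changing the value changes the ID
}
```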


306-322: Resource Deletion

Deleting the resource filter from the in-memory registry is straightforward. Good use of the mutex to avoid concurrency issues.

examples/resources/ctrlplane_environment/main.tf (23)

1-7: Terraform Configuration Block

The provider installation configuration is straightforward. No issues found here.


9-15: Example System Resource

Defines a system resource for demonstration. Clear naming and slug usage for the example.


17-56: Resource, Output, and Data: comparison_and

Demonstrates logical AND conditions for filtering resources. The example is illustrative of multi-condition filtering.


72-119: Resource, Output, and Data: comparison_or

Demonstrates logical OR conditions. The example is consistent with the previous block, showcasing alternative logic patterns.


123-155: Resource, Output, and Data: metadata_starts_with

Shows a starts-with operator usage for region-based filtering. The example is clear and would help users understand string-matching.


157-189: Resource, Output, and Data: metadata_ends_with

Similar approach but for ends-with filtering. Good coverage of string operators.


225-256: Resource, Output, and Data: metadata_regex

Regex usage is correct and flexible. Providing distinct examples for each part of the pattern is a good teaching approach.


258-295: Resource, Output, and Data: metadata_null

Showcases an AND condition to find unlabeled resources. Logical approach.


299-330: Resource, Output, and Data: kind_equals

Demonstrates how to filter by an object’s kind. Straightforward usage.


369-400: Resource, Output, and Data: identifier_equals

Clear example showing single pod identification by name. No issues found.


402-433: Resource, Output, and Data: identifier_regex

Regex usage is correct—matching numeric suffixes. This thoroughly illustrates pattern-based selection.


435-466: Resource, Output, and Data: identifier_starts_with

Demonstrates discovery of services by prefix. Straightforward usage.


468-499: Resource, Output, and Data: identifier_ends_with

Complements the starts_with block for suffix-based filtering.


501-532: Resource, Output, and Data: identifier_contains

Illustrates partial matches for caching resources. Helpful example.


569-600: Resource, Output, and Data: created_at_after

Similar to created_at_before. The same recommendation about parameterization applies here as well.


635-666: Resource, Output, and Data: created_at_after_or_on

The same feedback about documenting inclusivity applies if needed. Otherwise, this block is nicely consistent with the other date-based filters.


670-701: Resource, Output, and Data: last_sync_before

A time-based filter using last-sync timing. Good demonstration for maintenance or drift detection usage.


703-734: Resource, Output, and Data: last_sync_after

Mirrors last_sync_before usage with an after operator. Example is correct.


736-767: Resource, Output, and Data: last_sync_before_or_on

Aligns with the other before-or-on usage for sync times.


769-800: Resource, Output, and Data: last_sync_after_or_on

Demonstrates a complete set of time-based operators for sync times.


804-846: Resource, Output, and Data: production_postgres

An AND-based filter combining multiple domains (kind + name). Good real-world demonstration.


848-906: Resource, Output, and Data: arm_nonprod

Nested AND/OR conditions to find target resources. This is a robust demonstration of combining multiple operations.


908-950: Resource, Output, and Data: new_high_memory

Demonstrates multi-condition filtering for high-memory instance types. Consistency looks good.

@zacharyblasczyk zacharyblasczyk merged commit 7c8b6b5 into main Mar 12, 2025
9 checks passed