feat: Environments #7

Conversation
Walkthrough

This pull request introduces a wide range of changes across environment configuration, build processes, client libraries, documentation, Terraform provider resources, and testing.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client Library
    participant API as API Server
    participant P as Response Parser
    C->>API: CreateDeployment (with request body)
    API-->>C: HTTP response with deployment details
    C->>P: ParseCreateDeploymentResponse
    P-->>C: Parsed deployment response
```

```mermaid
sequenceDiagram
    participant U as User/Terraform
    participant TF as Terraform Engine
    participant PR as Provider
    participant API as API Client
    U->>TF: Run "terraform apply"
    TF->>PR: Invoke Environment Resource Create
    PR->>API: Send API request to create environment
    API-->>PR: Return environment details
    PR->>TF: Respond with state configuration
```
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.62.2): Error: can't load config: the Go language version (go1.23) used to build golangci-lint is lower than the targeted Go version (1.24.1)
```diff
 package client

-//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml openapi.v1.yaml
+//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/refs/heads/main/openapi.v1.json
```

This is the right way to generate the API spec.
```diff
 	$${TEST:-./internal/provider/...} $${TESTARGS}

-test-acceptance:
+testacc-quiet:
```

Useful for local testing because the tests are quite noisy.
```yaml
  - usetesting
  - predeclared
  - staticcheck
  - tenv
```

These are just deprecated now and I added one for tests.
```go
const (
	TestModePersist TestMode = "persistent"
	// TestModeCleanup TestMode = "cleanup".
)
```

We need system list/read support (the R in CRUD) to be able to run tests, so that we can clean them up after we have inspected them.
Actionable comments posted: 7
🧹 Nitpick comments (42)
client/client.go (1)
6-6: Consider version pinning for the remote API specification URL.

The change from a local OpenAPI file to a remote URL is a good approach for centralizing the API definition. However, using `refs/heads/main` means you're always pulling from the latest main branch, which could introduce unexpected changes when the upstream API spec is updated. Consider using a specific commit hash instead of a branch reference for better stability and reproducible builds:

```diff
-//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/refs/heads/main/openapi.v1.json
+//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config openapi.client.yaml https://raw.githubusercontent.com/ctrlplanedev/ctrlplane/<commit-hash>/openapi.v1.json
```

Also note that this change introduces a network dependency during builds, which could affect CI/CD reliability and offline development capabilities.
.env.example (1)
15-17: Duplicate environment variable definitions

The `TF_ACC` and `TF_LOG` variables are duplicated in the "Optional test configuration" section, which is redundant since they're already defined above with better comments.

```diff
-# Optional test configuration
-TF_ACC=1  # Set to 1 to run acceptance tests
-TF_LOG=INFO  # Set to DEBUG for verbose logging during tests
```

internal/provider/utils.go (4)
35-43: Review symbol replacements in the Slugify function

Some of the symbol replacements don't actually change anything (`"+"` → `"+"` and `"*"` → `"*"` at lines 35 and 40). Consider either:

- Removing these redundant mappings, or
- Documenting why they're explicitly included while preserving the symbols

```diff
-	{"+", "+"},
+	// Keep the plus symbol as is
+	{"+", "plus"}, // Or remove this line if you want to keep + unchanged
-	{"*", "*"},
+	// Keep the asterisk symbol as is
+	{"*", "star"}, // Or remove this line if you want to keep * unchanged
```
68-74: Optimize regex compilation for better performance

The regex patterns are recompiled every time `Slugify` is called, which is inefficient for frequently called functions. Consider compiling these patterns once as package variables.

```diff
 // Slugify converts a string to a valid slug format.
 // It replaces spaces with hyphens, collapses multiple spaces to a single hyphen,
 // and converts to lowercase.
+
+var (
+	// Pre-compiled regex patterns for Slugify
+	spaceRegex  = regexp.MustCompile(`\s+`)
+	hyphenRegex = regexp.MustCompile(`-+`)
+)
+
 func Slugify(s string) string {
 	// Convert to lowercase
 	s = strings.ToLower(s)
 	// Rest of the implementation...
 	// Replace spaces with hyphens
-	spaceReg := regexp.MustCompile(`\s+`)
-	s = spaceReg.ReplaceAllString(s, "-")
+	s = spaceRegex.ReplaceAllString(s, "-")
 	// Replace consecutive hyphens with a single hyphen
-	reg := regexp.MustCompile(`-+`)
-	s = reg.ReplaceAllString(s, "-")
+	s = hyphenRegex.ReplaceAllString(s, "-")
```
133-135: Pre-compile the regex pattern for slug validation

Similar to the `Slugify` function, the regex pattern in `ValidateString` is compiled on each validation. For better performance, consider pre-compiling this pattern.

```diff
 // SlugValidator validates that a slug is in the correct format.
-type SlugValidator struct{}
+type SlugValidator struct {
+	// Pre-compiled regex pattern for slug validation
+	slugRegex *regexp.Regexp
+}
+
+// NewSlugValidator creates a new instance of SlugValidator with a pre-compiled regex.
+func NewSlugValidator() SlugValidator {
+	return SlugValidator{
+		slugRegex: regexp.MustCompile(`^[a-z0-9]([a-z0-9-]*[a-z0-9])?$`),
+	}
+}

 // rest of the code...

 func (v SlugValidator) ValidateString(ctx context.Context, req validator.StringRequest, resp *validator.StringResponse) {
 	// Existing validation code...
-	pattern := `^[a-z0-9]([a-z0-9-]*[a-z0-9])?$`
-	regex := regexp.MustCompile(pattern)
-	if !regex.MatchString(value) {
+	if !v.slugRegex.MatchString(value) {
 		tflog.Debug(ctx, "Slug format validation failed", map[string]interface{}{
 			"value":   value,
 			"path":    req.Path.String(),
-			"pattern": pattern,
+			"pattern": v.slugRegex.String(),
 		})
 	// Rest of validation code...
```
101-103: Avoid duplicate content in Description and MarkdownDescription

The `Description` and `MarkdownDescription` methods contain identical content. Consider creating a shared helper function to avoid duplication.

```diff
+// getSlugValidatorDescription returns the description text for the validator
+func (v SlugValidator) getSlugValidatorDescription() string {
+	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+}

 func (v SlugValidator) Description(ctx context.Context) string {
-	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+	return v.getSlugValidatorDescription()
 }

 func (v SlugValidator) MarkdownDescription(ctx context.Context) string {
-	return fmt.Sprintf("Validates that the slug is in the correct format (lowercase alphanumeric characters and hyphens, starting and ending with an alphanumeric character) and does not exceed %d characters.", MaxSlugLength)
+	return v.getSlugValidatorDescription()
 }
```

testing/acctest/provider.go (2)
37-38: Update hardcoded provider source and version

The provider configuration uses hardcoded values for source (`hashicorp/ctrlplane`) and version (`0.0.1`), which may not match the actual provider details. Consider making these configurable or using accurate values.

```diff
 ctrlplane = {
-  source  = "hashicorp/ctrlplane"
-  version = "0.0.1"
+  source = "ctrlplanedev/ctrlplane"
+  # Version can be omitted for development testing
 }
```
51-59: Add BaseURL validation to PreCheck function

The `PreCheck` function validates the API key and workspace environment variables but doesn't check if the base URL is set. While a default value is used if empty, it might be good to at least log a warning.

```diff
 func PreCheck(t *testing.T) {
 	if v := os.Getenv(APIKeyEnvVar); v == "" {
 		t.Fatalf("%s must be set for acceptance tests", APIKeyEnvVar)
 	}
 	if v := os.Getenv(WorkspaceEnvVar); v == "" {
 		t.Fatalf("%s must be set for acceptance tests", WorkspaceEnvVar)
 	}
+	if v := os.Getenv(BaseURLEnvVar); v == "" {
+		t.Logf("Warning: %s not set, using default base URL", BaseURLEnvVar)
+	}
 }
```

internal/provider/retry.go (2)
20-43: Consider adding exponential backoff to the retry logic

The retry implementation is solid with good logging and timing controls. However, it uses a fixed delay between retries, which might not be optimal for all scenarios.
Consider implementing an exponential backoff strategy to improve resilience against transient issues:
```diff
 func Retry(ctx context.Context, cfg RetryConfig, operation func() (bool, error)) error {
 	startTime := time.Now()
 	var lastErr error
 	for i := 0; i < cfg.MaxRetries; i++ {
 		if time.Since(startTime) > cfg.TotalWaitTime {
 			tflog.Warn(ctx, "Exceeded total retry wait time", map[string]interface{}{
 				"max_wait_seconds": cfg.TotalWaitTime.Seconds(),
 			})
 			break
 		}
 		done, err := operation()
 		if done {
 			return nil
 		}
 		lastErr = err
 		tflog.Debug(ctx, "Retrying operation", map[string]interface{}{
 			"retry": i + 1,
 		})
-		time.Sleep(cfg.RetryDelay)
+		// Calculate backoff with jitter to prevent thundering herd
+		backoff := time.Duration(float64(cfg.RetryDelay) * math.Pow(2, float64(i)))
+		if backoff > cfg.TotalWaitTime {
+			backoff = cfg.TotalWaitTime
+		}
+		// Add jitter (±20%)
+		jitter := time.Duration(float64(backoff) * (0.8 + 0.4*rand.Float64()))
+		time.Sleep(jitter)
 	}
 	return lastErr
 }
```

Don't forget to add the necessary imports:

```go
import (
	"math"
	"math/rand"
)
```
22-22: Consider passing context to the operation function

The current operation function signature doesn't include the context parameter, which limits its ability to respect cancellation signals or deadlines.

```diff
-func Retry(ctx context.Context, cfg RetryConfig, operation func() (bool, error)) error {
+func Retry(ctx context.Context, cfg RetryConfig, operation func(context.Context) (bool, error)) error {
 	startTime := time.Now()
 	var lastErr error
 	for i := 0; i < cfg.MaxRetries; i++ {
 		// ...
-		done, err := operation()
+		done, err := operation(ctx)
 		// ...
 	}
 	return lastErr
 }
```

docs/data-sources/environment.md (1)
44-87: Consider adding examples to clarify deeply nested structures

While the documentation for the nested conditions is comprehensive, deeply nested structures can be difficult for users to understand without examples.
Consider adding an example usage section showing how to define a resource filter with multiple nested conditions, similar to:
```hcl
data "ctrlplane_environment" "example" {
  name      = "production"
  system_id = "sys-123456"

  # This will be populated after the data source is read;
  # resource_filter will look something like:
  # resource_filter = {
  #   type     = "selector"
  #   operator = "and"
  #   conditions = [
  #     {
  #       type     = "comparison"
  #       operator = "equals"
  #       key      = "environment"
  #       value    = "production"
  #     },
  #     {
  #       type     = "selector"
  #       operator = "or"
  #       conditions = [
  #         {
  #           type     = "comparison"
  #           operator = "equals"
  #           key      = "region"
  #           value    = "us-west"
  #         }
  #       ]
  #     }
  #   ]
  # }
}
```

internal/provider/environment_model.go (1)
45-55: Consider reducing code duplication in SetDefaults methods

Both `SetDefaults` methods contain identical code. Consider extracting the common logic to a helper function to improve maintainability.

```diff
+func setEnvironmentDefaults(description *types.String, metadata *types.Map, releaseChannels *[]types.String) {
+	if description.IsNull() {
+		*description = types.StringValue("")
+	}
+	if metadata.IsNull() {
+		*metadata = types.MapNull(types.StringType)
+	}
+	if *releaseChannels == nil {
+		*releaseChannels = []types.String{}
+	}
+}

 func (e *EnvironmentModel) SetDefaults() {
-	if e.Description.IsNull() {
-		e.Description = types.StringValue("")
-	}
-	if e.Metadata.IsNull() {
-		e.Metadata = types.MapNull(types.StringType)
-	}
-	if e.ReleaseChannels == nil {
-		e.ReleaseChannels = []types.String{}
-	}
+	setEnvironmentDefaults(&e.Description, &e.Metadata, &e.ReleaseChannels)
 }

 func (e *EnvironmentDataSourceModel) SetDefaults() {
-	if e.Description.IsNull() {
-		e.Description = types.StringValue("")
-	}
-	if e.Metadata.IsNull() {
-		e.Metadata = types.MapNull(types.StringType)
-	}
-	if e.ReleaseChannels == nil {
-		e.ReleaseChannels = []types.String{}
-	}
+	setEnvironmentDefaults(&e.Description, &e.Metadata, &e.ReleaseChannels)
 }
```

testing/testutils/config.go (1)
33-42: CheckDestroy implementation could be more specific

The current implementation only checks if a resource of the specified type exists, but it doesn't verify that the specific resource instance was destroyed.

```diff
 func CheckDestroy(s *terraform.State, resourceType string) error {
 	for _, rs := range s.RootModule().Resources {
 		if rs.Type != resourceType {
 			continue
 		}
-		return fmt.Errorf("resource %s still exists", resourceType)
+		return fmt.Errorf("resource %s with ID %s still exists", resourceType, rs.Primary.ID)
 	}
 	return nil
 }
```

examples/resources/ctrlplane_resource_filter/main.tf (2)
35-50: Data source duplicates resource filter structure

The data source definition duplicates the exact same filter conditions as the resource. Consider using variables or local values to avoid duplication if this is part of the same configuration.

52-69: Consider uncommenting example code for better documentation

The commented sections contain valuable examples that demonstrate how to reference resource filters and utilize outputs. Consider uncommenting these sections to provide a more complete example, or add a note explaining why they're commented out.
Also applies to: 105-135
internal/provider/environment_schema.go (1)
16-69: Consider adding conflict detection between resource_filter and resource_filter_id

Your schema documentation mentions that `resource_filter_id` and `resource_filter` are mutually exclusive, but there's no validation to prevent users from specifying both. Consider adding explicit validation to check for this conflict. You could implement this in the resource's `ConfigValidators` method:

```go
// In the resource implementation
func (r *EnvironmentResource) ConfigValidators(ctx context.Context) []resource.ConfigValidator {
	return []resource.ConfigValidator{
		resourcevalidator.ConflictingAttributes(
			path.MatchRoot("resource_filter"),
			path.MatchRoot("resource_filter_id"),
		),
	}
}
```

GNUmakefile (1)
86-90: Clean target is helpful but could be more comprehensive

The `clean` target removes common test artifacts but might miss some files. Consider adding patterns for additional test-related files or directories that might be generated.

```diff
 clean:
 	rm -f terraform.log
 	rm -rf .terraform
 	rm -f .terraform.lock.hcl
 	rm -f terraform.tfstate*
+	rm -f plan.tfplan
+	rm -f crash.log
+	find . -name ".terraform" -type d -exec rm -rf {} +
```

internal/integration/environment_test.go (1)
38-108: Consider consolidating environment creation code into a reusable helper.

All these test blocks for creating different environments have nearly identical setup and validation logic (e.g., generating a name, metadata, making the API call, verifying the response, and cleaning up). Consolidating these steps into a generic helper could reduce duplication, improve readability, and make it easier to maintain these tests.
internal/integration/integration_suite_test.go (1)
20-24: Remove or refactor unused code for clarity.

Line 22 is commented out, suggesting a previous approach (`TestModeCleanup`) that is no longer employed. Removing or refining it helps keep the code streamlined.

internal/integration/test_helpers_test.go (5)
25-51: Consider trimming whitespace from workspace string.
If `workspaceStr` can have leading or trailing whitespace (for example, from environment variables), it may fail to parse as a UUID or slug unexpectedly. Consider `strings.TrimSpace(workspaceStr)` to preempt confusion.

```diff
 func getWorkspaceID(ctx context.Context, workspaceStr string, apiClient *client.ClientWithResponses) (uuid.UUID, error) {
+	workspaceStr = strings.TrimSpace(workspaceStr)
 	Logger.Debug("getting workspace ID",
```
53-92: Increase randomness of test system naming to avoid collisions.
Currently, the system name uses 6 characters from a newly generated UUID. Under frequent test executions, collisions can occur. Consider using more characters or a timestamp.

```diff
-shortUUID := uuid.New().String()[:6]
-systemName := fmt.Sprintf("test-%s-%s", namePrefix, shortUUID)
+shortUUID := uuid.New().String()[:8]
+systemName := fmt.Sprintf("test-%s-%s", namePrefix, shortUUID)
```
94-114: Handle 404 responses during system deletion.
When the system is missing, returning a 404 from the API typically indicates that the system no longer exists. You may choose to treat a 404 as a successful deletion instead of an error, to ensure idempotent cleanup.

```diff
 _, err := apiClient.DeleteSystemWithResponse(ctx, systemID.String())
 if err != nil {
+	// If the error is a 404, consider logging and returning nil
 	return fmt.Errorf("failed to delete system: %w", err)
 }
```
116-136: Same note for environment deletion: handle 404 responses.
Similar logic applies here: returning a 404 can be treated as a successful deletion, which simplifies cleanup for partially removed environments.

```diff
 _, err := apiClient.DeleteEnvironmentWithResponse(ctx, environmentID.String())
 if err != nil {
+	// If the error is a 404, consider logging and returning nil
 	return fmt.Errorf("failed to delete environment: %w", err)
 }
```
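The "treat 404 as already deleted" pattern suggested for both system and environment cleanup can be sketched as a small wrapper. The `deleteTreatingNotFoundAsSuccess` helper and its status-code callback are hypothetical, not the suite's actual code:

```go
package main

import (
	"fmt"
	"net/http"
)

// deleteTreatingNotFoundAsSuccess wraps a delete call so that a 404 is
// treated as "already gone" rather than an error. deleteFn is a
// hypothetical stand-in for the generated client call; it returns the
// HTTP status code and any transport error.
func deleteTreatingNotFoundAsSuccess(deleteFn func() (int, error)) error {
	status, err := deleteFn()
	if err != nil {
		return fmt.Errorf("failed to delete: %w", err)
	}
	if status == http.StatusNotFound {
		// The object no longer exists; deletion is idempotent.
		return nil
	}
	if status < 200 || status >= 300 {
		return fmt.Errorf("unexpected status %d", status)
	}
	return nil
}

func main() {
	// A 404 from the API is reported as a successful (no-op) deletion.
	err := deleteTreatingNotFoundAsSuccess(func() (int, error) { return http.StatusNotFound, nil })
	fmt.Println(err == nil) // true
}
```

This keeps repeated `terraform destroy` runs (or test re-runs after a partial cleanup) from failing on already-deleted objects.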
152-167: Revisit ‘cleanup mode is no longer supported’ logic.
This function logs "cleanup mode is no longer supported," but may still be necessary for older test pipelines. If truly unsupported, consider removing it or clarifying fallback behavior.

internal/provider/environment_resource.go (2)
55-256: Consider splitting the Create method into smaller functions.
The `Create` method contains complex logic for metadata, policy, and resource filter handling. Splitting it into helper functions (e.g., one for validating resource filter usage, another for preparing the API payload) can improve readability and maintainability.
462-473: Confirm whether a 404 is acceptable on environment deletion.
If the environment no longer exists, the provider can ignore a 404 to maintain a consistent Terraform experience. This ensures repeated deletes do not fail.

internal/provider/resource_filter_schema.go (1)
17-277: Avoid duplicating logic between resource filters and data source filters.
Both resource and data source schemas share similar definitions. Consider refactoring common parts into a single helper to keep them consistent and reduce duplication.internal/provider/environment_data_source.go (3)
98-103: Handle duplicate environment names gracefully.
This section stops searching after finding the first environment with a matching name. If multiple environments share the same name, the code may inadvertently return the wrong one. To avoid confusion, consider raising an error if more than one environment is found or providing a mechanism for users to specify a more explicit match.
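One way to implement the stricter matching suggested above is to error on ambiguity rather than returning the first hit. This is a sketch over a simplified environment type with a hypothetical `findUniqueEnvironment` helper, not the data source's actual code:

```go
package main

import (
	"fmt"
)

// environment is a simplified stand-in for the API's environment object.
type environment struct {
	ID   string
	Name string
}

// findUniqueEnvironment returns the single environment matching name,
// and errors when the name is ambiguous instead of silently returning
// the first match.
func findUniqueEnvironment(envs []environment, name string) (*environment, error) {
	var match *environment
	for i := range envs {
		if envs[i].Name != name {
			continue
		}
		if match != nil {
			return nil, fmt.Errorf("multiple environments named %q; specify an ID to disambiguate", name)
		}
		match = &envs[i]
	}
	if match == nil {
		return nil, fmt.Errorf("no environment named %q", name)
	}
	return match, nil
}

func main() {
	envs := []environment{{ID: "1", Name: "prod"}, {ID: "2", Name: "prod"}}
	_, err := findUniqueEnvironment(envs, "prod")
	fmt.Println(err != nil) // true: an ambiguous name is an error
}
```

Surfacing the ambiguity as a diagnostic lets the user tighten the query instead of silently reading the wrong environment into state.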
129-137: Revisit handling of partially available environment data.
When certain environment fields are not yet filled (e.g., `policyId`, `description`, `metadata`), the code logs a warning and proceeds. For high-stakes operations, consider implementing a more robust check or retry mechanism to avoid unexpected incomplete data, or provide clear user-facing messaging about potential delays in data availability.
164-173: Reassess non-critical parsing errors.
If the resource filter fails to convert from the API, the code merely logs a warning and continues. This might mask issues if the resource filter is essential to correct operations. Consider converting this scenario to a hard failure (or at least a warning in Terraform output) so that users are immediately aware that the filter is invalid from the API's perspective.

internal/provider/system_resource.go (1)
223-228: Sanitize sensitive error details in logs.
Here, you log the entire `error` object when encountering `BadRequest`. This might inadvertently reveal sensitive info. Consider sanitizing or limiting the logged data to protect sensitive user or system details.

internal/provider/environment_data_source_test.go (2)
48-57: Complement the complex filter tests with additional scenarios.
These tests thoroughly validate a multi-condition environment filter. For additional confidence, consider adding scenarios with zero conditions, nested filters, or invalid operators to ensure full coverage of filter parsing.
72-80: Add partial-data or intermittent availability tests.
You already verify missing required parameters and nonexistent environments. Including a scenario where the environment is in a partial or in-progress state can strengthen error-handling validation and highlight any race conditions with eventual consistency.

internal/provider/resource_filter.go (1)
89-105: Possible collision concern.
Using only the first 8 hex characters of a SHA-256 hash may create a small chance of collisions over many filters. Typically, this is fine for a short hash, but consider logging or verifying collisions if these hashes are used as unique identifiers in large-scale scenarios.

internal/provider/resource_filter_resource.go (4)
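For reference, the truncated-hash scheme discussed in the collision comment above can be sketched as follows (a hypothetical `shortHash` helper, not the provider's implementation):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// shortHash returns the first 8 hex characters (32 bits) of SHA-256(s).
// By the birthday bound, 32 bits gives roughly even odds of a collision
// somewhere after about 2^16 (~65k) distinct inputs, so a registry
// using these values as unique IDs should detect collisions at scale.
func shortHash(s string) string {
	sum := sha256.Sum256([]byte(s))
	return hex.EncodeToString(sum[:])[:8]
}

func main() {
	fmt.Println(shortHash("example-filter"), len(shortHash("example-filter")) == 8)
}
```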
6-23: Global Registry Approach

The global registry with a sync.RWMutex is acceptable here, but be aware that relying on an in-memory registry in a Terraform provider can lead to state inconsistencies across multiple Terraform runs or parallel executions. If you need multi-process resilience, consider persisting registry data externally.
53-105: Extended Retry Logic

Increasing retry attempts to 10 with a 1-second delay can be helpful for ephemeral resources, but may also cause unintended slowdowns. Consider implementing an exponential backoff (e.g., doubling the delay each retry) to optimize performance and avoid fixed, potentially lengthy waits.
109-120: State-Only Resource

Declaring this resource as purely in-memory is clear. Just keep in mind that Terraform runs across multiple processes or machines could lose this state unless it's stored externally.
324-352: Registry-Dependent Import

Relying on an in-memory registry for `ImportState` is fragile if the registry wasn't populated by a previous run in the same process. Consider documenting this limitation or persisting the registry to ensure the import can succeed after process restarts.

examples/resources/ctrlplane_environment/main.tf (4)
191-223: Resource, Output, and Data: metadata_contains

Include a comment or note that `metadata_contains` might also match partial words, for users accustomed to exact matches. Otherwise, it's a good example of partial substring matching.
334-365: Resource, Output, and Data: provider_equals

Filters by provider with a `contains` operator referencing "gcp". Possibly rename the resource to "provider_contains" if the operator is truly `contains`.

```diff
-resource "ctrlplane_environment" "provider_equals" {
+resource "ctrlplane_environment" "provider_contains" {
```
536-567: Resource, Output, and Data: created_at_before

Demonstrates time-based filtering. Hard-coded dates may be easy for examples, but consider parameterizing for real-world usage.

602-633: Resource, Output, and Data: created_at_before_or_on

Continues the time-based filter pattern. Ensure documentation clarifies inclusive vs. exclusive date boundaries.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (43)
- .env.example (1 hunks)
- .github/workflows/test.yml (4 hunks)
- .gitignore (1 hunks)
- .golangci.yml (1 hunks)
- GNUmakefile (1 hunks)
- README.md (2 hunks)
- client/client.gen.go (14 hunks)
- client/client.go (1 hunks)
- client/openapi.generate.sh (0 hunks)
- client/openapi.v1.yaml (0 hunks)
- docs/data-sources/environment.md (1 hunks)
- docs/index.md (1 hunks)
- docs/resources/environment.md (1 hunks)
- docs/resources/resource_filter.md (1 hunks)
- docs/resources/system.md (1 hunks)
- examples/resources/ctrlplane_environment/main.tf (1 hunks)
- examples/resources/ctrlplane_resource_filter/main.tf (1 hunks)
- go.mod (1 hunks)
- internal/integration/environment_test.go (1 hunks)
- internal/integration/integration_suite_test.go (1 hunks)
- internal/integration/test_helpers_test.go (1 hunks)
- internal/provider/data_source_model.go (1 hunks)
- internal/provider/environment_data_source.go (1 hunks)
- internal/provider/environment_data_source_test.go (1 hunks)
- internal/provider/environment_model.go (1 hunks)
- internal/provider/environment_resource.go (1 hunks)
- internal/provider/environment_resource_test.go (1 hunks)
- internal/provider/environment_schema.go (1 hunks)
- internal/provider/provider.go (11 hunks)
- internal/provider/provider_test.go (2 hunks)
- internal/provider/resource_filter.go (1 hunks)
- internal/provider/resource_filter_resource.go (1 hunks)
- internal/provider/resource_filter_resource_test.go (1 hunks)
- internal/provider/resource_filter_schema.go (1 hunks)
- internal/provider/retry.go (1 hunks)
- internal/provider/system_resource.go (6 hunks)
- internal/provider/system_resource_test.go (1 hunks)
- internal/provider/utils.go (1 hunks)
- internal/provider/utils_test.go (1 hunks)
- terraform-provider-scaffolding-framework (0 hunks)
- terraform-registry-manifest.json (1 hunks)
- testing/acctest/provider.go (1 hunks)
- testing/testutils/config.go (1 hunks)
💤 Files with no reviewable changes (3)
- terraform-provider-scaffolding-framework
- client/openapi.generate.sh
- client/openapi.v1.yaml
🧰 Additional context used
🪛 LanguageTool
docs/index.md
[style] ~42-~42: To form a complete sentence, be sure to include a subject.
Context: ... The URL of the Ctrlplane API endpoint. Can be set in the CTRLPLANE_BASE_URL enviro...
(MISSING_IT_THERE)
🔇 Additional comments (178)
terraform-registry-manifest.json (1)
2-5: Consistent JSON formatting applied

The changes to this file are purely formatting-related, with consistent indentation applied to the JSON structure. There are no changes to the actual content or values in the manifest.
.golangci.yml (1)
20-20: New linter added: usetesting

Good addition of the `usetesting` linter, which ensures proper usage of the testing package in Go code. This is appropriate for a project with test files and will help maintain high-quality test implementations.

Note: The linters `exportloopref` and `tenv` appear to have been removed from the configuration based on the AI summary, but you previously acknowledged this was due to these linters being deprecated.

.github/workflows/test.yml (3)
9-9: String formatting standardization

The quotes around "README.md" have been changed from single quotes to double quotes to maintain consistent string formatting throughout the file.
Also applies to: 12-12
28-28: String formatting standardization for go-version-file

The quotes around "go.mod" have been changed from single quotes to double quotes, maintaining consistent string formatting throughout the file.
Also applies to: 43-43, 72-72
66-67: Updated Terraform version testing matrix

The Terraform versions tested in the CI workflow have been updated to focus on more recent versions (1.10.* and 1.11.*), removing older versions. This ensures the provider is tested against the versions most likely to be used by customers.
.env.example (1)
1-14: Well-structured environment variable example file

The `.env.example` file provides clear documentation of the required environment variables for running acceptance tests. The variables are well-organized by category (required, test configuration, Go test configuration) with helpful comments explaining their purpose.

internal/provider/provider_test.go (4)
13-13: Import for acctest package added

Good addition of the import for the `acctest` package, which supports the enhanced pre-check functionality.
25-28: Improved debug logging configuration

The code now sets the Terraform debug logging environment variable if it's not already set, which improves the diagnostic capabilities during test runs.
30-47: Enhanced environment variable validation

The implementation now uses a more structured approach to check required environment variables, with proper logging of their status. This makes the test setup more robust and easier to debug when issues occur.

The default setting for `CTRLPLANE_BASE_URL` is a good practice that maintains backward compatibility while enforcing the requirement.
49-50: Centralized pre-check handling

Good addition of the call to the common `acctest.PreCheck(t)` function, which likely contains shared validation logic. This promotes code reuse and consistency across tests.

internal/provider/provider.go (5)
76-79: Debug logging added for API key inclusion

Good addition of debug logging that shows when an API key is being added to the request without exposing the actual key. This improves traceability during troubleshooting.
85-87: Comprehensive debug logging added throughout the provider

Great addition of detailed debug logging throughout the provider. This will significantly help with troubleshooting issues during provider operations.
Also applies to: 94-97, 111-113, 120-123, 195-199, 218-220
165-166: ✅ Verification successful

Base URL updated to use ctrlplane.dev domain

The default base URL has been updated from `app.ctrlplane.com` to `app.ctrlplane.dev`, which aligns with the .env.example file. Ensure this change is consistent across all documentation and examples.

🏁 Script executed:

```shell
#!/bin/bash
# Check for other occurrences of the old domain in the codebase
echo "Checking for occurrences of old domain (app.ctrlplane.com):"
rg "app.ctrlplane.com" --type go --type markdown --type yaml
echo "Checking for occurrences of new domain (app.ctrlplane.dev):"
rg "app.ctrlplane.dev" --type go --type markdown --type yaml
```

Length of output: 1434

Confirmation: Base URL update is consistent

The change to use `"https://app.ctrlplane.dev"` as the default base URL has been verified across the codebase. All instances, including within `internal/provider/provider.go`, various test files, and the documentation (`docs/index.md`), reflect this update, and no references to the old `"app.ctrlplane.com"` domain were found.

Verified files:
- `internal/provider/provider.go` (description and assignment)
- Test files (`internal/provider/system_resource_test.go`, `internal/provider/provider_test.go`)
- Documentation (`docs/index.md`)
231-237:✅ Verification successful
New resources added to the provider
Two new resources have been added to the provider: `NewEnvironmentResource` and `NewResourceFilterResource`.
🏁 Script executed:
```shell
#!/bin/bash
# Check for the implementation of these new resources
echo "Searching for Environment resource implementation:"
rg -A 5 "func NewEnvironmentResource" --type go
echo "Searching for ResourceFilter resource implementation:"
rg -A 5 "func NewResourceFilterResource" --type go
```

Length of output: 1224
Verified: New Resource Implementation and Provider Updates
The new resources (`NewEnvironmentResource` and `NewResourceFilterResource`) have been confirmed. Their implementations exist in:
- `internal/provider/environment_resource.go`
- `internal/provider/resource_filter_resource.go`

The provider's resource registration in `internal/provider/provider.go` correctly includes these resources. There are no further changes or issues detected in the snippet.
239-243:✅ Verification successful
Data source addition for Environment
A new data source, `NewEnvironmentDataSource`, has been added.
🏁 Script executed:
```shell
#!/bin/bash
# Check for the implementation of the new data source
echo "Searching for Environment data source implementation:"
rg -A 5 "func NewEnvironmentDataSource" --type go
```

Length of output: 605
Data Source Implementation Verified
The new data source `NewEnvironmentDataSource` is confirmed to be correctly implemented in `internal/provider/environment_data_source.go` as expected from the registration in `internal/provider/provider.go`. No further modifications are needed.
- Verified implementation in `internal/provider/environment_data_source.go`
- Registration in `internal/provider/provider.go` aligns with the intended changes

client/client.gen.go (21)
27-37: Looks good: Enums for JobStatus.
These new constants for `JobStatus` are clearly named and consistent. No issues noted.
39-43: Looks good: Enums for JobWithTrigger approval statuses.
These new constants precisely capture approval states with clear naming.
92-98: New fields in `Environment` struct appear consistent.
The fields (`CreatedAt`, `Description`, `Id`, `Metadata`, `Name`, `Policy`, `PolicyId`) align well with the environment concept. No immediate concerns.
125-157: New `JobWithTrigger` struct appears well-structured.
It neatly encapsulates approval details and references to environment, deployment, etc. One caution: maps like `Variables` and `JobAgentConfig` are not concurrency-safe if shared across goroutines. Otherwise, this design is clear.
159-160: Defines `JobWithTriggerApprovalStatus` correctly.
This is consistent with the rest of the enumerated types.
298-326: `CreateDeploymentJSONBody` struct looks properly defined.
All fields seem relevant to deployment creation: name, slug, system ID, etc. No issues found.
484-486: `CreateDeploymentJSONRequestBody` alias is fine.
This mirrors the pattern seen elsewhere (aliasing the JSON body struct).
687-697: Interface methods for deployment creation and retrieval look consistent.
They match the naming convention and align with the newly added request/response logic.
818-828: `CreateDeploymentWithBody` method implementation is straightforward.
Reuses `NewCreateDeploymentRequestWithBody`, applies request editors, and sends. No issues.

830-840: `CreateDeployment` method follows the same pattern.
Uses `NewCreateDeploymentRequest`, sets context, and sends. Looks correct.
842-852: `DeleteDeployment` method logic is consistent.
No concerns; it follows the same request creation/editing pattern.
854-864: `GetDeployment` method logic is similarly consistent.
Everything appears to follow the established approach.
3173-3174: `GetJobResponse` updated to return `JobWithTrigger`.
This correctly matches the new struct. No issues.

3862-3870: `CreateDeploymentWithBodyWithResponse` parse wrapper added.
This wrapper function cleanly delegates to `ParseCreateDeploymentResponse`. Looks fine.

3871-3877: `CreateDeploymentWithResponse` wrapper is consistent.
It likewise delegates to `ParseCreateDeploymentResponse`. No concerns.

3879-3886: `DeleteDeploymentWithResponse` wrapper added.
Again, matches the standard pattern for parse delegation.

3888-3895: `GetDeploymentWithResponse` wrapper is consistent.
No issues found in how it handles `ParseGetDeploymentResponse`.

4280-4324: `ParseCreateDeploymentResponse` function.
Typical openapi-codegen logic. Uses `strings.Contains` to detect the JSON content type—common in generated code, although `strings.HasPrefix` might be slightly safer when charset parameters are present. No functional issue here.
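For readers unfamiliar with the trade-off mentioned above, a small stand-alone sketch (the function names here are illustrative, not from the generated client) contrasts the two checks:

```go
package main

import (
	"fmt"
	"strings"
)

// isJSONContains mimics the generated code's check: any content type
// mentioning "json" anywhere matches.
func isJSONContains(ct string) bool {
	return strings.Contains(ct, "json")
}

// isJSONPrefix is the slightly stricter alternative: it anchors the match at
// the start, so charset suffixes still pass ("application/json; charset=utf-8")
// but types that merely embed the word "json" later in the string do not.
func isJSONPrefix(ct string) bool {
	return strings.HasPrefix(ct, "application/json")
}

func main() {
	for _, ct := range []string{
		"application/json",
		"application/json; charset=utf-8",
		"text/plain; profile=json", // matched by Contains, not by HasPrefix
	} {
		fmt.Printf("%-35s contains=%v prefix=%v\n", ct, isJSONContains(ct), isJSONPrefix(ct))
	}
}
```

Since this is generated code, the check would have to be changed in the oapi-codegen templates rather than edited in place.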
4325-4367: `ParseDeleteDeploymentResponse` function.
Implementation mirrors the same pattern for response checks and JSON unmarshalling. No notable issues.

4369-4403: `ParseGetDeploymentResponse` function.
Matches the same generated pattern: checks status code, unmarshals JSON. No issues.

4699-4705: Case handling for 200 status with `JobWithTrigger`.
The addition of this branch and `JobWithTrigger` unmarshalling is consistent with the new job model. Looks correct.

go.mod (1)
6-16: Bulk dependency update looks good
The update of multiple dependencies to newer versions is good practice for security and feature improvements. Key framework updates include:
- terraform-plugin-framework: v1.12.0 → v1.14.1
- terraform-plugin-go: v0.25.0 → v0.26.0
- Adding terraform-plugin-log v0.9.0
docs/index.md (1)
42-42: Domain update consistent with environment changes
The default value for `base_url` has been updated from `https://app.ctrlplane.com` to `https://app.ctrlplane.dev`, which aligns with the environment variables added in the README and likely represents a change in the service's domain.
README.md (1)
62-68: Good addition of environment setup instructions
The addition of instructions for creating a `.env` file with the necessary environment variables for testing is helpful for contributors. This improves the developer experience and ensures consistent test setup.

docs/resources/system.md (1)
33-33: Improved user experience with auto-generated slugs
Moving the `slug` field from "Required" to "Optional" and updating its description to indicate automatic generation from the name is a good UX improvement. This allows users to omit the slug and have it auto-generated, reducing redundancy and potential errors.

internal/provider/retry.go (1)
13-18: Well-structured RetryConfig with clear parameters
The `RetryConfig` struct provides a clean interface for configuring retry behavior with all necessary parameters: maximum retries, total wait time, and delay between retries.

.gitignore (2)
37-39: Good security practice excluding environment and temporary files
Adding `.env` files to gitignore is important for security as these typically contain sensitive information like API keys. Excluding `.out` files and `.cursor` files also helps keep the repository clean.
41-45: Appropriate Terraform-specific exclusions
The additions for Terraform files are appropriate and follow best practices. Terraform state files often contain sensitive information and should not be committed to version control.
docs/data-sources/environment.md (4)
1-12: Well-structured documentation header
The documentation header provides clear identification of the data source with appropriate metadata and description.
18-22: Clear documentation of required attributes
The required attributes section is well-documented, clearly specifying that users must provide both the `name` and `system_id` to use this data source.
23-30: Comprehensive read-only attributes documentation
The read-only attributes section provides a thorough list of all properties that will be populated when the data source is read, giving users a clear understanding of what information will be available.
32-43: Well-structured nested schema documentation
The nested schema for `resource_filter` is clearly documented, showing all available attributes and their purposes.

internal/provider/data_source_model.go (3)
13-20: Enhanced DataSourceModel with additional fields
The `DataSourceModel` struct has been augmented with additional fields that will enable more comprehensive data source implementations. The new fields (`ID`, `Name`, `ResourceFilter`, and `CustomAttributes`) align with the documented attributes for the environment data source.
22-25: Well-defined CustomAttribute struct
The `CustomAttribute` struct is clearly defined with name-value fields, following Terraform's attribute pattern.
14-20:❓ Verification inconclusive
Verify CustomAttributes usage in environment data source
The `CustomAttributes` field is added to the `DataSourceModel`, but it's not documented in the environment data source schema. Ensure this field is either used correctly or documented appropriately.
🏁 Script executed:
```shell
#!/bin/bash
# Check if CustomAttributes is used in environment data source implementation
grep -r "CustomAttributes" --include="*.go" internal/provider/environment_*.go
# Check if CustomAttributes is documented in any other data sources
grep -r "custom_attributes" --include="*.md" docs/data-sources/
```

Length of output: 370
Verify CustomAttributes usage across environment data sources
The recent script run didn't return output confirming any usage or documentation of the `CustomAttributes` field in the environment data source code or the related docs. Please manually verify that this field is intentionally included in the `DataSourceModel`. If it's required by your design, ensure its usage is correctly implemented and add corresponding documentation; otherwise, consider removing it to avoid potential confusion.

internal/provider/environment_model.go (3)
10-20: Well-structured data model definition for environment resource
The `EnvironmentModel` struct provides a comprehensive representation of an environment with appropriate Terraform framework field tags. The inclusion of both a direct resource filter and a reference via ID is a good pattern that allows for flexibility in resource configuration.
22-31: Clean separation between resource and data source modelsGood practice to maintain separate model structs for resource and data source, even with similar fields. This separation allows for independent evolution of the models as requirements change.
33-43: Well-implemented default initialization
The `SetDefaults` method properly handles null/nil cases to prevent runtime errors, which is particularly important for the Terraform Plugin Framework's nullable types. Initializing empty collections rather than nil is a good defensive coding practice.

docs/resources/resource_filter.md (1)
1-77: Comprehensive documentation for resource filter with clear schema definition
The documentation is well-structured and follows Terraform documentation standards. It clearly defines all required, optional, and read-only attributes, along with detailed descriptions of nested schemas. The hierarchical organization of nested conditions is particularly well done.
internal/provider/utils_test.go (3)
13-79: Thorough test cases for the Slugify function
The test cases cover a wide range of scenarios, including simple strings, special characters, spaces, and edge cases. This comprehensive approach ensures the robustness of the slugification process across various inputs.
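The Slugify implementation itself is not shown in this excerpt; a minimal sketch of the behavior the tests imply (lowercasing, spaces to hyphens, stripping special characters, plus the 30-character limit exercised by the length tests) might look like:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

const maxSlugLen = 30 // the 30-character limit referenced in the slug-length tests

var (
	invalidChars = regexp.MustCompile(`[^a-z0-9-]+`) // anything outside [a-z0-9-]
	multiHyphen  = regexp.MustCompile(`-{2,}`)       // collapse runs of hyphens
)

// slugify is a hypothetical re-implementation of the behavior the tests
// describe: lowercase, spaces to hyphens, special characters stripped.
func slugify(name string) string {
	s := strings.ToLower(strings.TrimSpace(name))
	s = strings.ReplaceAll(s, " ", "-")
	s = invalidChars.ReplaceAllString(s, "")
	s = multiHyphen.ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

// validSlug mirrors the boundary checks: non-empty and at most maxSlugLen.
func validSlug(s string) bool {
	return s != "" && len(s) <= maxSlugLen
}

func main() {
	fmt.Println(slugify("My Production System!")) // my-production-system
	fmt.Println(validSlug(slugify("My Production System!")))
}
```

The actual provider may differ in detail (e.g. how it handles non-ASCII input); the test table in `utils_test.go` is the authoritative spec.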
81-122: Good boundary testing for slug length validation
The test properly verifies both valid and invalid slug lengths, including edge cases like empty strings and the maximum allowed length. The test structure, with clear case names and expected error flags, makes the test intent obvious.
124-135: Important null value test for validator
Testing the validator's behavior with null values is essential, as this is a common case in Terraform where optional attributes might not be set. Good defensive testing approach.
docs/resources/environment.md (1)
1-94: Well-documented environment resource with clear constraints
The documentation comprehensively covers all aspects of the environment resource, with clear separation between required, optional, and read-only attributes. The constraint between `resource_filter` and `resource_filter_id` is explicitly mentioned, which is important for users to understand.
The nested schema documentation for resource filters is detailed and consistent with the standalone resource filter documentation, which maintains a cohesive user experience.
testing/testutils/config.go (4)
19-22: Provider factory setup looks good
The `testAccProtoV6ProviderFactories` variable is correctly set up to instantiate the provider during acceptance testing.
24-31: Well-structured test case wrapper
Good approach to centralize the provider configuration and pre-check function. This will help maintain consistency across tests.
44-58: CheckResourceExists function implementation is correct
The function properly verifies both the existence of the resource and that it has a valid ID.
60-67: ConfigCompose implementation is simple and effective
The function provides a clean way to combine multiple Terraform configuration blocks.
examples/resources/ctrlplane_resource_filter/main.tf (2)
17-33: Resource filter definition looks good
The resource filter is well structured, with clear conditions for filtering StatefulSet resources that contain "postgres" in their name.
71-103: Inline filter example is well constructed
The inline filter example demonstrates an alternative approach with an "or" operator and multiple conditions, providing a good contrast to the referenced filter approach.
internal/provider/environment_schema.go (3)
45-51: Good use of plan modifiers for resource_filter_id
The `stringplanmodifier.RequiresReplace()` plan modifier correctly indicates that changes to this attribute require resource replacement, which is important for maintaining a proper resource lifecycle.
52-60: Well-structured inline resource filter attribute
The `resource_filter` attribute is properly defined as a single nested attribute with appropriate plan modifiers. The comment explaining that changes to an inline filter force recreation is helpful for understanding the behavior.
70-140: Data source schema implementation is comprehensive
The data source schema provides all necessary attributes with appropriate descriptions and computed flags. The nested structure for resource filters is particularly well-defined.
GNUmakefile (3)
15-17: Good addition of client code generation target
Adding a specific target for client code generation helps maintain clarity in the build process.
28-42: Improved acceptance test configuration with environment variable handling
The enhanced `testacc` target correctly checks for the presence of a `.env` file and sources variables from it, providing better error messages when configuration is missing.
44-62: Useful quiet test mode for acceptance tests
The `testacc-quiet` target provides a less verbose option for running tests, which is helpful for local testing.

internal/integration/environment_test.go (1)
1050-1055: Helper function aligns with previous feedback on system list and cleanup.
Using `safeDeleteTestSystem` to remove test systems after each scenario appears to address the need for top-level CRUD operations on systems mentioned in earlier feedback. This approach ensures consistent cleanup and prevents stale resources from persisting. No concerns here.

internal/provider/environment_resource_test.go (1)
17-139: Extensive coverage for environment resource behavior.
This suite thoroughly validates environment configurations under varied scenarios (simple and complex filters, name/kind filtering, etc.), ensuring robust test coverage. Including disabled tests for unsupported operations is a good way to document future enhancements.
internal/integration/integration_suite_test.go (1)
79-98: Autocleanup mechanism is well-implemented.
Automatically cleaning up resources from prior test runs when `TestModeAutoCleanup` is active is an effective strategy to keep the test environment consistent without manual intervention.

internal/integration/test_helpers_test.go (2)
19-22: Ensure API key is never logged.
It’s good practice to confirm that the API key is not inadvertently exposed in logs. Although this function itself doesn’t log the key, be mindful of any future logging statements referencing request headers.
138-150:✅ Verification successful
Review flexibility of API host URL.
Appending “/api” may not be desired in every scenario, especially if the user or integration tests rely on a different base path. Ensure this convention is aligned with the rest of the system.
Would you like to confirm that all tests and references assume “/api” as the correct path? If so, run:
🏁 Script executed:
```shell
#!/bin/bash
rg -A 3 '/api'
```

Length of output: 2318
API Host URL Suffix Usage Verified
After reviewing the grep results, it’s clear that the system consistently enforces the “/api” suffix across multiple components (e.g., in both `internal/integration/test_helpers_test.go` and `internal/provider/provider.go`). This confirms that the tests and provider configuration rely on “/api” as the base path.
- The function in `internal/integration/test_helpers_test.go` appends “/api” if it isn’t present.
- A similar pattern is observed in `internal/provider/provider.go`.
- Comments in `client/client.gen.go` further indicate that a relative path is expected to be appended to the given server URL.

If this behavior is intentional, the current approach is acceptable. However, if flexibility for alternative base paths is desired, consider parameterizing the API host suffix.
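The suffix normalization described above can be sketched as follows (the helper name is illustrative; the real logic lives in the test helpers and provider):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureAPISuffix appends "/api" to a base URL unless it already ends with
// it, tolerating trailing slashes. A parameterized variant would accept the
// suffix as an argument instead of hard-coding "/api".
func ensureAPISuffix(base string) string {
	base = strings.TrimRight(base, "/")
	if strings.HasSuffix(base, "/api") {
		return base
	}
	return base + "/api"
}

func main() {
	fmt.Println(ensureAPISuffix("https://app.ctrlplane.dev"))
	fmt.Println(ensureAPISuffix("https://app.ctrlplane.dev/api/"))
}
```

Making the helper idempotent (as above) means configuration from users who already include the suffix is handled gracefully.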
internal/provider/environment_data_source.go (1)
63-66: Consider allowing alternative identification strategies.
Currently, both `system_id` and `name` are strictly required. However, there may be use cases where a user wants to reference the environment by a unique environment ID alone or another identifier. Consider offering an alternative field or fallback logic, enabling more flexible identification strategies.

internal/provider/system_resource.go (1)
327-348: Check concurrency edge cases for slug auto-generation.
Slug auto-generation triggers if the field is removed from the configuration. In high-throughput scenarios, two or more processes could attempt updates, potentially leading to collisions or inconsistent states. Consider adding uniqueness checks or more robust concurrency safeguards to avoid conflicts.

internal/provider/environment_data_source_test.go (1)
16-35: Tests provide solid coverage of basic environment data source usage.
The steps effectively validate attribute correctness in normal scenarios, demonstrating a well-structured test design.

internal/provider/resource_filter_resource_test.go (4)
1-14: No issues found.
15-94: Extensive coverage for resource lifecycle tests.
These steps provide a thorough sequence of create, import, update, and read checks across multiple scenarios (simple filter, name filter, complex conditions, operator changes, etc.). Great job ensuring that each attribute is validated and that import works correctly.
96-113: Proper error handling tests.
Validating error messages for missing and invalid fields demonstrates robust coverage. This ensures that the provider fails gracefully when encountering invalid configurations.
115-202: Config generation helpers look good.
These helper functions systematically generate different filter configurations for test cases, improving readability and maintainability. Consider grouping similar logic if the set of configurations grows larger in the future, but overall this is perfectly acceptable.

internal/provider/resource_filter.go (21)
1-15: No issues found.
16-71: Well-structured schema builder.
The function `BuildResourceFilterNestedAttributes` elegantly handles both leaf-level filters (when depth = 0) and nested filters (depth > 0). This is a neat approach for composable, recursive schema definitions.
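The depth-based recursion can be illustrated with plain structs (the real code builds terraform-plugin-framework attributes, which are omitted here for brevity; the field names are illustrative):

```go
package main

import "fmt"

// attr is a stand-in for a schema attribute; children model a nested
// "conditions" attribute.
type attr struct {
	Name     string
	Children []attr
}

// buildFilterAttrs mirrors the pattern described above: at depth 0 only leaf
// attributes are emitted; at depth > 0 a nested "conditions" attribute
// recurses with depth-1, bounding the schema to a fixed nesting level.
func buildFilterAttrs(depth int) []attr {
	attrs := []attr{{Name: "type"}, {Name: "operator"}, {Name: "key"}, {Name: "value"}}
	if depth > 0 {
		attrs = append(attrs, attr{Name: "conditions", Children: buildFilterAttrs(depth - 1)})
	}
	return attrs
}

// maxDepth reports how many nesting levels a built schema contains.
func maxDepth(attrs []attr) int {
	d := 0
	for _, a := range attrs {
		if len(a.Children) > 0 {
			if c := 1 + maxDepth(a.Children); c > d {
				d = c
			}
		}
	}
	return d
}

func main() {
	fmt.Println(maxDepth(buildFilterAttrs(2))) // 2 levels of nesting
}
```

Bounding the recursion with an explicit depth is what keeps the schema finite, since Terraform schemas cannot be truly self-referential.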
72-72: No issues found.
73-81: ResourceFilterModel design is clear and flexible.
The typed fields and TFS SDK attributes effectively capture filter data.
82-82: No issues found.
106-106: No issues found.
107-120: Recursive initialization of conditions is correct.
Ensures that subconditions are also properly initialized. This approach reduces null-slice errors in deeper nested filters.
121-121: No issues found.
122-152: Robust filtering logic in ToAPIFilter.
Graceful handling of optional fields like `operator`, `key`, `value`, and `not` is commendable. Logging statements help debug filter transformations.
153-153: No issues found.
154-164: Good default assignment.
`ValidateAPIFilter` ensures a valid `"type"` key is always present. Useful fallback behavior.
165-165: No issues found.
166-197: Recursive conversion is neatly implemented.
`convertConditionsToAPIFilter` effectively processes nested conditions and gracefully handles empty values for comparison filters. Logging further aids in diagnosing issues.
198-198: No issues found.
199-285: Comprehensive parsing in FromAPIFilter.
The function carefully handles both top-level and nested conditions, populating each field with robust error handling. This is quite thorough.
286-286: No issues found.
287-293: Logical approach for defaulting empty value with comparison filters.
Helps maintain consistent behavior across nested conditions.
294-294: No issues found.
295-306: Maintains consistent nested structure.
`PostProcessResourceFilter` ensures that subconditions are preserved in a final consistent state. Straightforward and effective.
307-307: No issues found.
308-321: DeepFixResourceFilter approach.
The function ensures that comparison filters with subconditions always have an empty string value. This aligns with the API contract, preventing accidental data mismatch.

internal/provider/system_resource_test.go (41)
1-25: No issues found.
26-42: Flexible resource config function.
`testAccSystemResourceConfig` facilitates dynamic generation of HCL with an optional description, making the tests more concise.
43-43: No issues found.
44-51: Valid approach for missing required fields scenario.
`testAccSystemResourceConfigMissingRequired` demonstrates a negative test to verify error handling when “name” is omitted.
52-52: No issues found.
53-62: Auto-generation test of slug.
Ensuring the provider gracefully handles omitted slug is an important acceptance test scenario.
63-64: No issues found.
65-67: Simple pointer helper.
`stringPtr` is a handy utility for optional fields in the test config.
68-68: No issues found.
69-81: Duplicate slug scenario.
`testAccSystemResourceConfigDuplicateSlug` clarifies expected failures when two resources share the same slug.
82-82: No issues found.
83-91: Invalid slug test.
Helps confirm validation logic catches whitespace (and presumably other invalid) slugs.
92-92: No issues found.
93-134: Custom destroy verification.
`testAccCheckSystemDestroy` ensures that the resource is truly removed from the external system. This direct client usage is a good pattern for acceptance tests.
135-135: No issues found.
136-225: Broad coverage in TestAccSystemResource.
Tests creation, import, updates (name, slug, description), and final read validations. The approach is well structured for acceptance testing with multiple resource steps.
226-226: No issues found.
227-271: Error handling coverage.
`TestAccSystemResourceErrorHandling` demonstrates multiple negative test scenarios (missing fields, invalid slug, duplicates). Good to see thorough coverage of potential user misconfigurations.
272-272: No issues found.
273-287: Explicit invalid slug check.
Reinforces slug validation with additional specialized scenario. Clear error expectation is crucial.
288-288: No issues found.
289-315: Robust import error tests.
Covers invalid IDs and scenario with a valid UUID that references a non-existent resource. Solid approach for import coverage.
316-316: No issues found.
317-394: Unit-level check for setSystemResourceData.
Ensures consistent data population with or without descriptions. This granular test is beneficial for diagnosing edge cases in the resource data models.
395-406: Verifying distinct slugs with same name.
`testAccSystemResourceConfigSameNameDifferentSlug` ensures that multiple resources with identical names still function as expected.
407-407: No issues found.
408-437: Same name, different slug acceptance test.
Well-designed scenario to confirm that the provider can handle multiple resources referencing the same name field while enforcing unique slugs.
438-438: No issues found.
439-458: Test for auto-generated slug from complex name.
Validates slugification logic for multi-word names. Nicely covers typical user input patterns.
459-459: No issues found.
460-479: Complex name slugification test.
Ensures spaces become hyphens. A strong validation for user-friendly name patterns.
480-480: No issues found.
481-496: Slug length constraint.
`TestAccSystemResourceLongSlug` verifies that the provider rejects automatically generated slugs exceeding the 30-character limit, preventing unexpected resource creation.
497-497: No issues found.
498-512: Explicitly testing overlong slug.
`TestAccSystemResourceExplicitLongSlug` further confirms the slug length validation for user-supplied values.
513-513: No issues found.
514-520: Config generator for overlong slug.
A concise function that sets up long slug scenarios.
521-521: No issues found.
522-556: Checking plan for overlong slug update.
`TestAccSystemResourceUpdateWithLongSlug` ensures the plan fails gracefully while preserving the original valid resource. This is a thoughtful approach to acceptance testing.
557-557: No issues found.
558-592: Resource replacement on slug change.
`TestAccSystemResourceRecreateOnSlugChange` effectively demonstrates that altering the slug triggers a new resource, matching expected lifecycle behavior for ephemeral IDs.

internal/provider/resource_filter_resource.go (11)
1-3: License Header Looks Good
No issues found in the new license header.
25-36: Idempotent Registry Initialization
The idempotent design of `InitFilterRegistry` is clear and safe. Good use of the mutex to prevent race conditions.
38-51: Deferred Lock Release
The code properly defers lock release and ensures the registry is initialized if needed. This is a solid concurrency handling pattern.
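A minimal stand-alone sketch of this idempotent, mutex-guarded initialization pattern (names are illustrative, not the provider's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	registryMu sync.Mutex
	registry   map[string]string // filter ID -> serialized filter
)

// initFilterRegistry is safe to call any number of times and from multiple
// goroutines: the map is created only on the first call.
func initFilterRegistry() {
	registryMu.Lock()
	defer registryMu.Unlock() // deferred unlock mirrors the reviewed code
	if registry == nil {
		registry = make(map[string]string)
	}
}

// registerFilter ensures the registry exists, then stores an entry under lock.
func registerFilter(id, filter string) {
	initFilterRegistry()
	registryMu.Lock()
	defer registryMu.Unlock()
	registry[id] = filter
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			registerFilter(fmt.Sprintf("id-%d", n), "filter")
		}(i)
	}
	wg.Wait()
	fmt.Println(len(registry))
}
```

`sync.Once` would be an equivalent, slightly more idiomatic way to express the one-time initialization, at the cost of a second synchronization primitive alongside the mutex.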
121-130: Resource State Model
The `ResourceFilterResourceModel` fields are well-defined and reflect the filter structure accurately.
132-134: Resource Type Name
Metadata sets the resource name to `<ProviderTypeName>_resource_filter`—a good, descriptive naming pattern that follows Terraform conventions.
136-173: Schema Completeness
The schema comprehensively covers all attributes needed for resource filtering. The usage of nested attributes for `conditions` is neatly organized.
175-177: Empty Configure
Leaving `Configure` empty is acceptable if external configuration isn’t needed.
179-239: Validations in `Create`
The approach to validating required attributes based on the filter type is logical. Error messages are clear. Just ensure that any future filter types also introduce the necessary validation blocks.
241-270: Re-Registration in `Read`
Re-registering the existing resource filter during `Read` ensures consistency in the in-memory registry. This is unusual for Terraform but suits a state-only resource approach.
272-304: ID Recalculation in `Update`
Hashing the filter again in `Update` may overwrite the ID if attributes are changed, effectively creating a new registry entry. Ensure that is the intended behavior for your use case.
Can you confirm whether generating a new ID on every change is the desired outcome? If not, we might need to use the existing ID and only re-hash for new resources.
306-322: Resource Deletion
Deleting the resource filter from the in-memory registry is straightforward. Good use of the mutex to avoid concurrency issues.
examples/resources/ctrlplane_environment/main.tf (23)
1-7: Terraform Configuration Block
The provider installation configuration is straightforward. No issues found here.
9-15: Example System Resource
Defines a system resource for demonstration. Clear naming and slug usage for the example.
17-56: Resource, Output, and Data: `comparison_and`
Demonstrates logical AND conditions for filtering resources. The example is illustrative of multi-condition filtering.

72-119: Resource, Output, and Data: `comparison_or`
Demonstrates logical OR conditions. The example is consistent with the previous block, showcasing alternative logic patterns.

123-155: Resource, Output, and Data: `metadata_starts_with`
Shows a starts-with operator usage for region-based filtering. The example is clear and would help users understand string matching.

157-189: Resource, Output, and Data: `metadata_ends_with`
Similar approach but for ends-with filtering. Good coverage of string operators.

225-256: Resource, Output, and Data: `metadata_regex`
Regex usage is correct and flexible. Providing distinct examples for each portion of the pattern is a good teaching approach.

258-295: Resource, Output, and Data: `metadata_null`
Showcases an AND condition to find unlabeled resources. Logical approach.

299-330: Resource, Output, and Data: `kind_equals`
Demonstrates how to filter by an object’s kind. Straightforward usage.

369-400: Resource, Output, and Data: `identifier_equals`
Clear example showing single pod identification by name. No issues found.

402-433: Resource, Output, and Data: `identifier_regex`
Regex usage is correct—matching numeric suffixes. This thoroughly illustrates pattern-based selection.

435-466: Resource, Output, and Data: `identifier_starts_with`
Demonstrates discovery of services by prefix. Straightforward usage.

468-499: Resource, Output, and Data: `identifier_ends_with`
Complements the `starts_with` block for suffix-based filtering.

501-532: Resource, Output, and Data: `identifier_contains`
Illustrates partial matches for caching resources. Helpful example.
569-600: Resource, Output, and Data: `created_at_after`
Similar to `created_at_before`. Recommending the same approach for parameterization.

635-666: Resource, Output, and Data: `created_at_after_or_on`
Same feedback about specifying inclusivity if needed. Otherwise, nicely consistent with the other date-based filters.

670-701: Resource, Output, and Data: `last_sync_before`
A time-based filter using last-sync timing. Good demonstration for maintenance or drift-detection usage.

703-734: Resource, Output, and Data: `last_sync_after`
Mirrors `last_sync_before` usage with an `after` operator. Example is correct.

736-767: Resource, Output, and Data: `last_sync_before_or_on`
Aligns with the other before-or-on usage for sync times.

769-800: Resource, Output, and Data: `last_sync_after_or_on`
Demonstrates a complete set of time-based operators for sync times.

804-846: Resource, Output, and Data: `production_postgres`
An AND-based filter combining multiple domains (kind + name). Good real-world demonstration.

848-906: Resource, Output, and Data: `arm_nonprod`
Nested AND/OR conditions to find target resources. This is a robust demonstration of combining multiple operations.

908-950: Resource, Output, and Data: `new_high_memory`
Demonstrates multi-condition filtering for high-memory instance types. Consistency looks good.
Summary by CodeRabbit
New Features
Improvements
Documentation
Chores