fix: counts api response structure change #1616
nikhilsinhaparseable merged 1 commit into parseablehq:main
Conversation
Walkthrough
The counts handler now converts query results to JSON, normalizes record timestamps to an explicit +00:00 offset, rebuilds the RecordBatch with start_time, end_time, and count columns, and, when group-by columns are present, enriches the resulting records with the group-by values copied from the original JSON.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Handler as HTTP Handler
    participant SQL as SQL Engine
    participant Convert as JSON/Record Converter
    participant Process as Record Post-Processor
    Client->>Handler: GET counts (with group_by, conditions)
    Handler->>SQL: Execute counts query (includes group_by columns)
    SQL-->>Handler: Return RecordBatch with results
    Handler->>Convert: Convert RecordBatch to JSON
    Convert-->>Handler: JSON records
    Handler->>Process: Parse records to CountsRecord objects
    Process->>Process: Normalize timestamps (+00:00)
    Process-->>Handler: Normalized CountsRecord objects
    Handler->>Convert: Construct new RecordBatch<br/>(start_time, end_time, count)
    Convert-->>Handler: Normalized RecordBatch
    Handler->>Convert: Convert back to JSON
    Convert-->>Handler: Normalized JSON records
    alt group_by columns present
        Handler->>Process: Enrich records with group-by values<br/>(copy from original JSON)
        Process-->>Handler: Enriched count records
    end
    Handler-->>Client: Return counts response
```
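The "Normalize timestamps (+00:00)" step in the diagram can be sketched as follows. This is an illustrative guess at the transform, not the handler's actual code: `normalize_ts` is a hypothetical helper, and the sketch only covers `Z`-suffixed and offset-less timestamps (it does not handle negative offsets like `-05:00`).

```rust
/// Illustrative sketch: ensure a timestamp string carries an explicit
/// "+00:00" UTC offset, as the counts post-processor is described to do.
/// Negative offsets are deliberately out of scope for this sketch.
fn normalize_ts(ts: &str) -> String {
    if let Some(stripped) = ts.strip_suffix('Z') {
        // "…T00:00:00Z" becomes "…T00:00:00+00:00"
        format!("{stripped}+00:00")
    } else if ts.contains('+') {
        // Already has an explicit positive offset; leave it alone.
        ts.to_string()
    } else {
        // No offset at all; assume UTC and append one.
        format!("{ts}+00:00")
    }
}

fn main() {
    assert_eq!(normalize_ts("2024-01-01T00:00:00Z"), "2024-01-01T00:00:00+00:00");
    assert_eq!(normalize_ts("2024-01-01T00:00:00"), "2024-01-01T00:00:00+00:00");
    assert_eq!(normalize_ts("2024-01-01T00:00:00+05:30"), "2024-01-01T00:00:00+05:30");
    println!("ok");
}
```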
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (warnings)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/handlers/http/query.rs`:
- Around lines 390-396: the group-by columns (`group_by_cols` / `body.conditions.group_by`) may currently include the reserved synthesized keys ("start_time", "end_time", "count"), which later causes `rec.insert(...)` to clobber the normalized count payload and produce duplicate keys in `fields`. After constructing `group_by_cols` (and before `has_groups` / `rec.insert` are used), validate that it has no intersection with the reserved set {"start_time", "end_time", "count"} and return a clear BadRequest/validation error rejecting the request (or alternatively alias the colliding names) so collisions can never reach `rec.insert`. Reference `group_by_cols`, `body.conditions.group_by`, `has_groups`, `rec.insert`, and `fields` when implementing the check and error path.
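The validation the comment asks for can be sketched in isolation. This is a standalone illustration, not the actual handler code: `reserved_collisions` is a hypothetical helper, and in the real handler its non-empty result would be turned into a BadRequest-style `QueryError` before `rec.insert` runs.

```rust
use std::collections::HashSet;

/// Keys the counts handler synthesizes into every record; a group-by
/// column with one of these names would clobber the normalized payload.
const RESERVED: [&str; 3] = ["start_time", "end_time", "count"];

/// Hypothetical helper: return the requested group-by columns that
/// collide with the reserved set, so the caller can reject the request.
fn reserved_collisions(group_by_cols: &[String]) -> Vec<String> {
    let reserved: HashSet<&str> = RESERVED.iter().copied().collect();
    group_by_cols
        .iter()
        .filter(|c| reserved.contains(c.as_str()))
        .cloned()
        .collect()
}

fn main() {
    // Grouping by an ordinary column is fine.
    assert!(reserved_collisions(&["status".to_string()]).is_empty());
    // Grouping by "count" collides with a synthesized key and should
    // be rejected with a validation error before rec.insert is reached.
    let bad = reserved_collisions(&["count".to_string(), "host".to_string()]);
    assert_eq!(bad, vec!["count".to_string()]);
    println!("collisions: {:?}", bad);
}
```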
- Around lines 424-446: the `serde_json::from_value(...)` call (producing `CountsRecord`) and the `RecordBatch::try_from_iter(...)` call currently map their errors to client-error variants (`QueryError::SerdeJsonError` / `QueryError::CustomError`), but failures here are server-side transform bugs, not bad requests. Change both `.map_err(...)` calls to the `QueryError` variant used for internal failures (e.g. `QueryError::InternalError`), preserving the original error text in the message, so these failures return a 5xx instead of a 4xx.
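The error-mapping change can be illustrated without the real types. This sketch stands in for the handler's code: the `QueryError` enum here is a hypothetical minimal version (the real variant names may differ), and `parse_count` stands in for the fallible `serde_json::from_value` / `RecordBatch::try_from_iter` calls.

```rust
/// Hypothetical minimal QueryError; the real enum in
/// src/handlers/http/query.rs has more variants.
#[derive(Debug, PartialEq)]
enum QueryError {
    /// Server-side (5xx) variant the review asks these paths to use.
    InternalError(String),
}

/// Stand-in for a fallible transform such as serde_json::from_value.
fn parse_count(raw: &str) -> Result<u64, std::num::ParseIntError> {
    raw.parse::<u64>()
}

/// The fix in miniature: map the transform failure into the internal
/// (5xx) variant while preserving the original error text.
fn to_counts_record(raw: &str) -> Result<u64, QueryError> {
    parse_count(raw).map_err(|e| QueryError::InternalError(e.to_string()))
}

fn main() {
    assert_eq!(to_counts_record("42"), Ok(42));
    match to_counts_record("not-a-number") {
        Err(QueryError::InternalError(msg)) => {
            // The original parse error text survives in the message.
            assert!(msg.contains("invalid digit"));
        }
        other => panic!("expected InternalError, got {:?}", other),
    }
    println!("ok");
}
```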
📒 Files selected for processing (1)
src/handlers/http/query.rs