feat: Extend \dt psql command output with shard metadata (#709) #812
Open
Adi-Goll wants to merge 1 commit into pgdogdev:main from
Conversation
Adi-Goll
commented
Mar 5, 2026
self.seen_tables.insert(table_lookup.to_string());
let mut new_col = String::new();
for (i, val) in map[table_lookup].iter().enumerate() {
Contributor
Author
I was thinking, maybe it would be better to have a more unique key for the HashMap, maybe instead of just the table name I should use the schema + table name? Thoughts?
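For illustration, a schema-qualified key could look like the sketch below. This is a minimal example of the idea raised in the comment, not the PR's actual code; `qualified_key` and the map layout are assumptions.

```rust
use std::collections::HashMap;

// Hypothetical helper: build a collision-free lookup key from schema + table.
// Two tables named "users" in different schemas no longer share an entry.
fn qualified_key(schema: &str, table: &str) -> String {
    format!("{schema}.{table}")
}

fn main() {
    let mut shard_map: HashMap<String, Vec<usize>> = HashMap::new();
    shard_map.insert(qualified_key("public", "users"), vec![0, 1, 2]);
    shard_map.insert(qualified_key("audit", "users"), vec![0]);

    // Distinct schemas now produce distinct keys.
    assert_ne!(
        qualified_key("public", "users"),
        qualified_key("audit", "users")
    );
    assert_eq!(shard_map.len(), 2);
}
```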
Adi-Goll
commented
Mar 5, 2026
pending_explain: None,
begin_stmt: None,
router: Router::default(),
seen_tables: HashSet::new(),
Contributor
Author
Not sure if it's reasonable to add this for \dt, because the original command doesn't do any aggregation or deduplication handling.
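The deduplication pattern the `seen_tables` field enables can be sketched as follows. Names here are illustrative, not the PR's actual functions; it simply shows how `HashSet::insert`'s return value distinguishes first sight from repeats while streaming rows from multiple shards.

```rust
use std::collections::HashSet;

// Hypothetical: emit a row for a table only the first time it is seen
// while streaming \dt results from multiple shards.
fn should_emit(seen_tables: &mut HashSet<String>, table_lookup: &str) -> bool {
    // `insert` returns true only if the value was not already present.
    seen_tables.insert(table_lookup.to_string())
}

fn main() {
    let mut seen = HashSet::new();
    assert!(should_emit(&mut seen, "public.users"));  // first shard: emit
    assert!(!should_emit(&mut seen, "public.users")); // later shards: skip
    assert!(should_emit(&mut seen, "public.orders")); // different table: emit
}
```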
f94ae29 to 23d3571 (Compare)
Intercept \dt command, and add a `Shard` column to the output. Add a flag to `Route` that indicates if \dt is being executed so the Shard column is conditionally applied. Add `shard_map` HashMap to `Route` as well that stores tables with their corresponding shard. Introduce `forward_with_shard` function in backend/pool/connection/binding.rs that exposes the shard_map property to be streamed in the query engine. Add engine logic to populate the new column correctly and handle tables sharded across multiple databases.

Ex. output:

        List of tables
 Schema |   Name    | Type  | Owner  |  Shard
--------+-----------+-------+--------+---------
 public | only_on_0 | table | ubuntu | 0
 public | only_on_1 | table | ubuntu | 1
 public | only_on_2 | table | ubuntu | 2
 public | users     | table | ubuntu | 0, 1, 2

Signed-off-by: Aditya Gollamudi <adigollamudi@gmail.com>
23d3571 to dcbd665 (Compare)
This PR intercepts the psql `\dt` command and adds a `Shard` column to the output. First, a flag is added to `Route` that indicates whether `\dt` is being executed, so the Shard column is conditionally applied. A `shard_map` HashMap is also added to `Route` that stores table names with their corresponding shard. This PR also introduces a `forward_with_shard()` function in backend/pool/connection/binding.rs that exposes the shard_map property so the correct shards can be streamed from the query engine. Finally, engine logic was added to populate the new column correctly and handle tables sharded across multiple databases. The following output uses this config file for testing:
pgdog.toml
Note: the same `users` table was created on multiple databases, so deduplication logic was added during streaming, via a `HashSet` property added to the `QueryEngineContext`.
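The multi-shard rendering in the Shard column (e.g. `0, 1, 2` for `users`) could be produced along these lines. This is a sketch of the column formatting only; `shard_column` is a hypothetical name, and the PR's actual loop lives in the query engine.

```rust
// Hypothetical: join a table's shard list into the text shown in the
// new Shard column, e.g. [0, 1, 2] -> "0, 1, 2".
fn shard_column(shards: &[usize]) -> String {
    shards
        .iter()
        .map(|s| s.to_string())
        .collect::<Vec<_>>()
        .join(", ")
}

fn main() {
    assert_eq!(shard_column(&[0, 1, 2]), "0, 1, 2"); // table on all shards
    assert_eq!(shard_column(&[1]), "1");             // table on one shard
}
```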
After this PR: