Mark hypertable and chunk as user catalog tables #9410

zilder wants to merge 1 commit into timescale:main
@dbeck, @melihmutlu: please review this pull request.
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Force-pushed from 2ff8a17 to 72f561c.
What do you intend to use this for? Why only those two tables?
svenklemm left a comment:
I think we need some serious testing to check the implications of this change. It looks innocent at first glance but does have implications.

Setting `ALTER TABLE foo SET (user_catalog_table = true)` affects the table in several ways:
1. WAL logging changes: The table gets full-page WAL logging during checkpoints, same as system catalog tables. This means more WAL volume but better crash safety.
2. pg_dump behavior: The table is treated as a catalog for dump purposes — its schema is dumped but not its data by default (like pg_class itself).
3. MVCC snapshot behavior: Queries on the table use SnapshotNow-like catalog snapshot semantics in some code paths, meaning they can see recently committed rows even within a transaction that started earlier.
4. Vacuum/freeze behavior: The table follows system catalog freezing rules — it may be frozen more aggressively (controlled by vacuum_freeze_min_age catalog defaults).
5. Cache invalidation: Does not automatically participate in syscache invalidation — that's only for actual system catalogs.
6. Logical replication: The table may be excluded from logical replication since it's treated as a catalog table rather than user data.
7. Hot Standby: writes to user catalog tables are blocked on a standby just like system catalogs; the standby treats them as read-only catalog data.
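For anyone who wants to poke at these effects locally, the option can be set and inspected like this (a minimal sketch against a live PostgreSQL instance; the table name `foo` is hypothetical):

```sql
-- Hypothetical scratch table, just for experimenting with the option
CREATE TABLE foo (id int PRIMARY KEY, val int);
ALTER TABLE foo SET (user_catalog_table = true);

-- The setting is stored as a reloption on pg_class
SELECT relname, reloptions FROM pg_class WHERE relname = 'foo';

-- It can be turned off again with RESET
ALTER TABLE foo RESET (user_catalog_table);
```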
Hi @svenklemm,

I'm working on a custom timescaledb-aware logical decoding plugin. I have a functional PoC. I will need more time to go through your list.

I cannot confirm this. I was able to see all catalog modifications in the test_decoding stream:

```sql
create table test2 (key int, val int, device_id int);
select create_hypertable('test2', by_range('key', 2000));
insert into test2 values (1, 1, 1);
```

Not sure I understand the problem. Writes are blocked on a standby for all tables either way (with the probable exception of hint bits).
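One way to run such a check end-to-end with the built-in test_decoding plugin (the slot name is made up for illustration; this needs a live server with `wal_level = logical`):

```sql
-- Create a logical slot before running the DDL/DML above
SELECT pg_create_logical_replication_slot('tsdb_test_slot', 'test_decoding');

-- ... create the hypertable and insert data ...

-- Peek at the decoded stream; changes to _timescaledb_catalog.*
-- tables appear here like any other table's changes
SELECT data FROM pg_logical_slot_peek_changes('tsdb_test_slot', NULL, NULL);

-- Clean up
SELECT pg_drop_replication_slot('tsdb_test_slot');
```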
|
I took a deeper look into the postgres code regarding `user_catalog_table`.

The only difference between regular tables and catalog tables is that, when tuple visibility is calculated, the slot's catalog_xmin is taken into account instead of its xmin. So the vacuum/freeze behavior for catalog tables is actually more conservative than for regular tables.

```c
bool isCatalogRel; /* to handle recovery conflict during logical
                    * decoding on standby */
```

I couldn't find anything criminal there, just extra logical-decoding-related stuff.
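The retention horizon in question is observable from SQL: while a logical slot exists, `pg_replication_slots` reports the `catalog_xmin` that protects (user) catalog tuples still needed by decoding. A quick way to look at it:

```sql
-- xmin protects regular tuples (if the slot holds one at all);
-- catalog_xmin protects tuples of system and user catalog tables
SELECT slot_name, xmin, catalog_xmin, restart_lsn
FROM pg_replication_slots;
```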
This is an actual limitation. Commands that rewrite the table fail:

```sql
alter table test_catalog alter column val type int8;
ERROR:  cannot rewrite table "test_catalog" used as a catalog table
```

As far as I understand (at least from what I've seen in the past), when a tsdb catalog table requires changes, the table is recreated in the upgrade script, not altered. So this shouldn't be a problem?
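For context, only ALTERs that force a table rewrite are rejected; metadata-only changes still go through. A sketch using the `test_catalog` table from above:

```sql
-- Rejected: changing the column type forces a heap rewrite
ALTER TABLE test_catalog ALTER COLUMN val TYPE int8;
-- ERROR: cannot rewrite table "test_catalog" used as a catalog table

-- Allowed: metadata-only changes do not rewrite the heap
ALTER TABLE test_catalog ADD COLUMN note text;
ALTER TABLE test_catalog RENAME COLUMN note TO comment;
```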
Example:

```sql
insert into test_catalog values (1, 50) on conflict (id) do update set val = EXCLUDED.val;
ERROR:  ON CONFLICT is not supported on table "test_catalog" used as a catalog table
```
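If an upsert is ever needed on such a table, it can be expressed without ON CONFLICT, e.g. as an UPDATE followed by a conditional INSERT (a sketch only; not safe against concurrent writers without additional locking):

```sql
UPDATE test_catalog SET val = 50 WHERE id = 1;
INSERT INTO test_catalog (id, val)
SELECT 1, 50
WHERE NOT EXISTS (SELECT 1 FROM test_catalog WHERE id = 1);
```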
Cannot confirm this either. Here's an excerpt from a `pg_dump` output:

```
--
-- Data for Name: hypertable; Type: TABLE DATA; Schema: _timescaledb_catalog; Owner: zilder
--

COPY _timescaledb_catalog.hypertable (id, schema_name, table_name, associated_schema_name, associated_table_prefix, num_dimensions, chunk_sizing_func_schema, chunk_sizing_func_name, chunk_target_size, compression_state, compressed_hypertable_id, status) FROM stdin;
17 _timescaledb_internal _compressed_hypertable_17 _timescaledb_internal _hyper_17 0 _timescaledb_functions calculate_chunk_interval 0 2 \N 0
16 public test _timescaledb_internal _hyper_16 1 _timescaledb_functions calculate_chunk_interval 0 1 17 0
21 public test2 _timescaledb_internal _hyper_21 1 _timescaledb_functions calculate_chunk_interval 0 0 \N 0
\.

--
-- Data for Name: bgw_job; Type: TABLE DATA; Schema: _timescaledb_catalog; Owner: zilder
--

COPY _timescaledb_catalog.bgw_job (id, application_name, schedule_interval, max_runtime, max_retries, retry_period, proc_schema, proc_name, owner, scheduled, fixed_schedule, initial_start, hypertable_id, config, check_schema, check_name, timezone) FROM stdin;
\.

--
-- Data for Name: chunk; Type: TABLE DATA; Schema: _timescaledb_catalog; Owner: zilder
--

COPY _timescaledb_catalog.chunk (id, hypertable_id, schema_name, table_name, compressed_chunk_id, status, osm_chunk, creation_time) FROM stdin;
20 17 _timescaledb_internal compress_hyper_17_20_chunk \N 0 f 2026-03-19 11:51:35.920006+01
15 16 _timescaledb_internal _hyper_16_15_chunk 20 9 f 2026-03-02 17:27:41.149408+01
23 21 _timescaledb_internal _hyper_21_23_chunk \N 0 f 2026-03-19 14:52:26.618244+01
\.
```
Force-pushed from 72f561c to 4badc45.
Mark tables `_timescaledb_catalog.hypertable` and `_timescaledb_catalog.chunk` with `WITH (user_catalog_table = true)` so they can be accessed during logical decoding using a historic snapshot. This is required for a consistent view of the timescaledb catalog from logical decoding plugins.
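To illustrate what this enables, below is an untested sketch of how a decoding plugin could read `_timescaledb_catalog.chunk` from inside an output-plugin callback. While a transaction is being decoded, a historic snapshot is installed, and passing a NULL snapshot to `systable_beginscan` makes the scan use the catalog snapshot, which is that historic snapshot; this only works on the two tables because of the `user_catalog_table` marking. The function name is made up, the API calls are from the PostgreSQL server headers, and this compiles only inside a server extension build:

```c
/* Sketch (untested): scan _timescaledb_catalog.chunk with the
 * historic snapshot from within a logical decoding callback. */
#include "postgres.h"
#include "access/genam.h"
#include "access/table.h"
#include "catalog/namespace.h"
#include "utils/rel.h"

static void
scan_tsdb_chunk_catalog(void)   /* hypothetical helper */
{
    Oid         nspid = get_namespace_oid("_timescaledb_catalog", false);
    Oid         relid = get_relname_relid("chunk", nspid);
    Relation    rel   = table_open(relid, AccessShareLock);
    SysScanDesc scan;
    HeapTuple   tup;

    /* NULL snapshot => catalog snapshot, i.e. the historic
     * snapshot while a transaction is being decoded */
    scan = systable_beginscan(rel, InvalidOid, false, NULL, 0, NULL);
    while (HeapTupleIsValid(tup = systable_getnext(scan)))
    {
        /* extract attributes with heap_getattr(...) here */
    }
    systable_endscan(scan);
    table_close(rel, AccessShareLock);
}
```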
Force-pushed from 4badc45 to 9e16261.